Yeah, I linked to Tomasik’s earlier musings on this a while back in a comment.
I must say I am very impressed by this kind of negative-utilitarian reasoning, as it has captured a concern of mine that I once naively assumed to be unquantifiable by utilitarian ethics. There might be many plausible future worlds where scenarios like “Omelas” or “SCP-231” would be the norm, possibly with (trans)humanity acquiescing to them or perpetuating them for a rational reason. What’s worse, such futures might not even be acknowledged as disastrous/Unfriendly by people contemplating the prospect. Consider transhuman values simply diverging so widely that some groups in a would-be “libertarian utopia” would perpetrate things (to their own unwilling members or other sentients) which the rest of us would find abhorrent, yet the only way to influence such groups could be aggression and total non-cooperation. That might not be viable for the objecting factions due to game-theoretic reasons (avoiding a “cascade of defection”), ideological motives, or an insufficient capability to project military force. See Three Worlds Collide for some ways this might plausibly play out.
Brian is, so far, the only utilitarian thinker I’ve read who even mentions Omelas as a potential grave problem, along with more standard transhumanist concerns such as em slavery or “suffering subroutines”. I agree with the implications that he draws. I would further add that an excessive focus on reducing X-risk (and, indeed, on ensuring security and safety of all kinds) could have very scary present-day political implications, not just future ones.
(Which is why I am so worried and outspoken about the growth of a certain socio-political ideology among transhumanists and tech geeks; X-risk even features in some of the arguments for it that I’ve read—although much of it can be safely dismissed as self-serving fearmongering and incoherent apocalyptic fantasies.)
I must say I am very impressed by this kind of negative-utilitarian reasoning, as it has captured a concern of mine that I once naively assumed to be unquantifiable by utilitarian ethics
Do you mean that given certain comparisons of outcomes A and B, you agree with its ranking? Or that it captures your reasons? The latter seems dubious, unless you mean you buy negative utilitarianism wholesale.
If you don’t care about anything good, then you don’t have to worry about accepting smaller bads to achieve larger goods, but that goes far beyond “throwing out the baby with the bathwater.” Toby Ord gives some of the usual counterexamples.
If you’re concerned about deontological tradeoffs as in those stories, a negative utilitarian of that stripe would eagerly torture any finite number of people if that would kill a sufficiently larger population that suffers even occasional minor pains in lives that are overall quite good.
This seems to presuppose that “good” is synonymous with “pleasurable conscious states”. Under broader (and less question-begging) definitions of “good”, such as “whatever states of the world I want to bring about” or “whatever is in accordance with other-regarding reasons for action”, negative utilitarians would simply deny that pleasurable consciousness-states fulfill the criterion (or that they fulfill it better than non-existence or hedonically neutral flow-states).
Ord concludes that negative utilitarianism leads to outcomes where “everyone is worse off”, but this of course also presupposes an axiology that negative utilitarians would reject. Likewise, it wouldn’t be a fair criticism of classical utilitarianism to say that the very repugnant conclusion leaves everyone worse off (even though from a negative or prior-existence kind of perspective it seems like it), because at least according to the classical utilitarians themselves, existing slightly above “worth living” is judged better than non-existence.