Systematically avoiding every situation where you risk someone’s life in exchange for a low-importance experience would add up to a high-importance, life-ruining experience for you (starving to death in your apartment, I guess?).
We could easily ban speeds above 15 km/h for all vehicles except ambulances. Nobody starves to death in that scenario; it’s just very inconvenient. We value the convenience lost in that scenario more than the lives lost in our actual reality, so we don’t ban high-speed vehicles.
Ordinal preferences are bad and insane and they are to be avoided.
What’s really wrong with utilitarianism is that you can’t actually sum utilities: it’s a type error. Utilities are only defined up to a positive affine transformation, so what would their sum even mean?
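To make the type-error point concrete, here is a minimal sketch (hypothetical agents and numbers, Python purely for illustration): rescaling one agent’s utility by a positive affine transformation leaves their preferences exactly the same, but flips which outcome has the larger “total utility”.

```python
# A minimal sketch of why summing utilities is a type error.
# Each agent's vNM utility function is only defined up to a positive affine
# transformation u -> a*u + b (a > 0): both versions represent the same
# preferences. But the *sum* across agents is not invariant under such
# rescalings, so "maximize total utility" depends on an arbitrary choice.

# Hypothetical utilities of two agents over two outcomes X and Y.
alice = {"X": 1.0, "Y": 0.0}
bob   = {"X": 0.0, "Y": 0.6}

def rescale(u, a, b):
    """Apply a positive affine transformation; preferences are unchanged."""
    assert a > 0
    return {outcome: a * value + b for outcome, value in u.items()}

def total(outcome, *utilities):
    return sum(u[outcome] for u in utilities)

# With the original numbers, X has the larger total...
print(total("X", alice, bob), total("Y", alice, bob))    # 1.0 vs 0.6 -> pick X

# ...but rescale Bob's utility (same preferences, different representation)
# and the ranking of the totals flips, though nobody's preferences changed.
bob2 = rescale(bob, a=10.0, b=0.0)
print(total("X", alice, bob2), total("Y", alice, bob2))  # 1.0 vs 6.0 -> pick Y
```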
The problem, I think, is that humans naturally conflate two types of altruism. The first type is caring about other entities’ mental states. The second type is “game-theoretic” or “alignment-theoretic” altruism: a generalized notion of what it means to care about someone else’s values. Roughly, I think the good version of the second type requires you to bargain fairly on behalf of the entity you are being altruistic towards.
Let’s take the “World Z” thought experiment. The problem, from the second-type-of-altruism perspective, is that the total utilitarian gets very large utility from this world, while each inhabitant of this world, by premise, gets very small utility, which is an unfair division of gains.
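Here is a toy illustration of that unfairness (the budget, the square-root utility function, and the Nash-product criterion are all my assumptions, not part of the thought experiment): split a fixed resource budget evenly among N people and compare the total-utilitarian score with a Nash-bargaining-style product of gains.

```python
# A toy "World Z"-style comparison, under hypothetical assumptions:
# a fixed resource budget R is split evenly among N people, each person's
# utility is u(r) = sqrt(r) of their share, and non-existence counts as 0.
# Total utilitarianism ranks worlds by N * u(R/N); one crude way to formalize
# "fair division of gains" is a Nash-bargaining-style product of the gains,
# i.e. u(R/N)^N, shown here in log form as N * log(u(R/N)).
import math

R = 1000.0  # hypothetical resource budget

def per_person_utility(n):
    return math.sqrt(R / n)

def total_utility(n):
    return n * per_person_utility(n)

def log_nash_product(n):
    return n * math.log(per_person_utility(n))

for n in (10, 100, 368, 10_000, 1_000_000):
    print(f"N={n:>9}: per-person={per_person_utility(n):8.4f}  "
          f"total={total_utility(n):10.2f}  log-Nash={log_nash_product(n):12.2f}")

# Total utility keeps growing as N explodes, even though each person's utility
# shrinks toward zero (the World Z direction); the Nash-style criterion instead
# peaks near N = R/e ~ 368, where each person still gets a non-trivial share.
```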
One may object: why not create entities who think that a very small share of the gains is fair? My answer is that if an entity can be satisfied with an infinitesimal share of the gains, it can also be satisfied with an infinitesimal share of anthropic measure, i.e., non-existence, and it’s more altruistic to look for more demanding entities to fill the universe with.
My general problem with animal welfare, from the bargaining perspective, is that most animals probably don’t have enough agency to have any sort of representative at the bargaining table. We can imagine a CEV of shrimp that is negative-utilitarian and wants to kill all shrimp, or positive-utilitarian and thinks that even a very painful existence is worth it, or a CEV that prefers shrimp swimming in heroin, or something human-like, or something totally alien; the sum of these guesses probably comes out to “do not torture, and otherwise do as you please”.
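A rough sketch of that “sum of guesses” move, with entirely made-up hypotheses and scores just to show the shape of the aggregation: every contested policy is a disaster under at least one plausible guess about shrimp CEV, while “don’t torture, otherwise do as you please” is the only one nobody vetoes.

```python
# Hypothetical scores in [-10, 10] for a few human policies under several
# mutually incompatible guesses about what a shrimp CEV would want.
# All numbers are illustrative assumptions, not claims about actual shrimp.
policies = ["torture", "painful_existence_ok", "kill_all_shrimp",
            "heroin_tanks", "dont_torture_otherwise_free"]

hypotheses = {
    "negative_utilitarian": {"torture": -10, "painful_existence_ok": -8,
                             "kill_all_shrimp": 10, "heroin_tanks": 2,
                             "dont_torture_otherwise_free": 1},
    "positive_utilitarian": {"torture": -6, "painful_existence_ok": 8,
                             "kill_all_shrimp": -10, "heroin_tanks": -1,
                             "dont_torture_otherwise_free": 2},
    "hedonic":              {"torture": -10, "painful_existence_ok": -2,
                             "kill_all_shrimp": -5, "heroin_tanks": 10,
                             "dont_torture_otherwise_free": 1},
    "alien":                {"torture": -7, "painful_existence_ok": 0,
                             "kill_all_shrimp": 0, "heroin_tanks": -8,
                             "dont_torture_otherwise_free": 0},
}

for policy in policies:
    scores = [h[policy] for h in hypotheses.values()]
    worst = min(scores)
    mean = sum(scores) / len(scores)
    print(f"{policy:28s} worst-case={worst:4d}  average={mean:6.2f}")

# Only "dont_torture_otherwise_free" avoids being catastrophic under some
# hypothesis, and it also comes out on top on average: the guesses roughly
# aggregate to "do not torture, otherwise do as you please".
```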