Hi All,

I’m Will Crouch. Apart from one other comment, this is my first time posting on LW. However, I know and respect many people within the LW community.
I’m a DPhil student in moral philosophy at Oxford, though I’m currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It’s difficult to do so, but I argue that you can.
I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one’s marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.
I wouldn’t call myself a ‘rationalist’ without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we’ve got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I’m uncertain—there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I’m extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one ‘adheres’ to (whatever that means).
Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?
Haha! I don’t think I’m worthy of squeeing, but thank you all the same.
In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:
Population A: 1 person exists, with a life full of horrific suffering. Her utility is −100.
Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is −99.9.
Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren’t worth living just can’t be a good thing.
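The comparison can be checked with a few lines of arithmetic. A minimal sketch (the population figures are just the ones from the example above; representing a population as (count, utility) groups avoids building a 100-billion-element list):

```python
# Average utilitarianism ranks populations by mean utility per person.

def average_utility(groups):
    """Mean utility of a population given as (count, utility-per-person) groups."""
    total = sum(count * u for count, u in groups)
    size = sum(count for count, _ in groups)
    return total / size

pop_a = [(1, -100.0)]                 # one person at utility -100
pop_b = [(100_000_000_000, -99.9)]    # 100 billion people at utility -99.9

# B has the higher average, so average utilitarianism calls B better...
assert average_utility(pop_b) > average_utility(pop_a)
# ...even though B contains vastly more total suffering.
assert sum(c * u for c, u in pop_b) < sum(c * u for c, u in pop_a)
```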
That’s not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren’t worth living just can’t be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn’t apply to the hypothetical.
I haven’t thought about these things that much, but my current position is that average utilitarianism is not actually absurd—the absurd results of the thought experiments arise because those thought experiments ignore the fact that people interact with each other.
I don’t understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don’t think that such a world would be better, then you must agree that average utilitarianism is false.
Here’s another, even more obviously decisive, counterexample to average utilitarianism. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.
Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don’t think that such a world would be better, then you must agree that average utilitarianism is false.
I do think that the former is better (to the extent that I can trust my intuitions in a case that different from those in their training set).
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.
“Separability” of value just means being able to evaluate something without having to look at anything else. I think that whether or not it’s a good thing to bring a new person into existence depends only on facts about that person (assuming they don’t have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn’t be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it’s good or bad to bring that person into existence.
But, let’s return to the intuitive case above, and make it a little stronger.
Now suppose:
Population A: 1 person suffering a lot (utility −10)
Population B: That same person, suffering an arbitrarily large amount (utility −n, for any arbitrarily large n), and a very large number, m, of people suffering −9.9.
Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. That is, average utilitarianism is willing to add horrendous suffering to someone’s already horrific life, in order to bring into existence many other people with horrific lives.
Do you still get the intuition in favour of average here?
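The “for any n, there is some m” claim can be verified directly. A sketch (the threshold m > 10(n − 10) is my own algebra, obtained by solving (−n − 9.9m)/(1 + m) > −10; the numbers are illustrative):

```python
def avg_b(n, m):
    # Population B: the original person at utility -n, plus m people at -9.9.
    return (-n + m * -9.9) / (1 + m)

AVG_A = -10.0          # Population A: one person at utility -10

n = 1_000_000          # make the original person's suffering enormous
m = 10 * (n - 10) + 1  # just past the threshold m > 10*(n - 10)

assert avg_b(n, m) > AVG_A       # average utilitarianism now prefers B
assert avg_b(n, m - 2) < AVG_A   # just below the threshold, it does not
```

However large n is made, a sufficiently large m restores the preference for B.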
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human—as in, in pop A you will get utility −10, in pop B you get an expected (1/m)(−n) + ((m−1)/m)(−9.9). These intuitions could correspond to a straightforward “maximize expected util of ‘being someone in this world’”, or something like “suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning, maximize this being’s utility”. Such perspectives would give the “non-intuitive” result in these sorts of thought experiments.
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
Perhaps people simply objected to the implied selfish motivations.
Perhaps! Though I certainly didn’t intend to imply that this was a selfish calculation—one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.
assuming they don’t have any causal effects on other people
Once you make such an unrealistic assumption, the conclusions won’t necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.
When discussing such questions, we need to be careful to distinguish the following:
1. Is a world containing population B better than a world containing population A?
2. If a world with population A already existed, would it be moral to turn it into a world with population B?
3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I’d live somewhere in the world, but not who I’d be, would I choose population B?
I am inclined to give different answers to these questions. Similarly for Parfit’s repugnant conclusion; the exact phrasing of the question could lead to different answers.
Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say) and call this population C. Then the combination of B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, so matching your intuition.
I suspect that this is the situation we’re actually in: a large, maybe infinite, population elsewhere that we can’t do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth’s population, and we can’t make a judgement one way or another.
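The background-population point is easy to check numerically. A sketch (the specific sizes and utilities are mine, purely for illustration): adding a large distant population C at utility 10 flips the ranking that average utilitarianism gives to the earlier A and B.

```python
# Populations as lists of (count, utility-per-person) groups.

def avg(groups):
    return sum(c * u for c, u in groups) / sum(c for c, _ in groups)

A = [(1, -10.0)]                       # one person at -10
B = [(1, -1000.0), (100_000, -9.9)]    # m large enough that avg(B) > avg(A)
C = [(10**12, 10.0)]                   # huge background population at utility 10

assert avg(B) > avg(A)                 # without C, average util prefers B
assert avg(A + C) > avg(B + C)         # with C, it prefers A: the ranking flips
```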
Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more.
While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse.
Both worlds contain people “suffering horribly”.

One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.
So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!
Because it doesn’t contain anyone else. There’s only one human left and they’re “suffering horribly”.
Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, “Because in this world, more people suffer headaches.”
What would you conclude about my sanity?

Most people value humanity’s continued existence.

I’m glad you’re here! Do you have any comments on Nick Bostrom and Toby Ord’s idea for a “parliamentary model” of moral uncertainty?
Thanks! Yes, I’m good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it’s wrong according to both theories). If we can make such comparisons, then we don’t need the parliamentary model: we can just use expected utility theory.
Sometimes, though, it seems that such comparisons aren’t possible. E.g. I add one person whose life isn’t worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren’t possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
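A minimal sketch of the variance-normalisation idea (the option set, toy value functions, and credences below are mine, purely illustrative): standardise each theory’s value function to mean 0 and variance 1 across the options under consideration, then maximise credence-weighted expected value.

```python
from statistics import mean, pstdev

def normalise(values):
    """Standardise a theory's values over the option set to mean 0, variance 1."""
    mu, sigma = mean(values.values()), pstdev(values.values())
    return {opt: (v - mu) / sigma for opt, v in values.items()}

options = ["grow_population", "shrink_population", "status_quo"]
# Hypothetical value assignments by two theories (arbitrary units per theory):
total_util = {"grow_population": 50.0, "shrink_population": -50.0, "status_quo": 0.0}
average_util = {"grow_population": -2.0, "shrink_population": 1.0, "status_quo": 0.0}
credences = {"total": 0.6, "average": 0.4}

norm = {"total": normalise(total_util), "average": normalise(average_util)}
ev = {opt: sum(credences[t] * norm[t][opt] for t in norm) for opt in options}
best = max(ev, key=ev.get)
```

Because each theory is rescaled to the same variance, neither can dominate merely by stating its values on a larger numerical scale, which is the point of the normalisation.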
Sorry if that was a bit of a complex response to a simple question!
Hi Will,
I woudn’t call myself a ‘rationalist’ without knowing a lot more about what that means.
I think most LWers would agree that “anyone who tries to practice rationality as defined on Less Wrong” is a passable description of what we mean by ‘rationalist’.
Thanks for that. I guess that means I’m not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn’t care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it’s what I happen to value, but because I think it’s objectively valuable (and if you value something else, like promoting suffering, then I think you’re mistaken!) That is, I’m a moral realist. Whereas the definition given in Eliezer’s post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I’m not just being pedantic!
Whereas the definition given in Eliezer’s post suggests that being a rationalist presupposes moral anti-realism
Not at all. (Eliezer is a sort of moral realist). It would be weird if you said “I’m a moral realist, but I don’t value things that I know are objectively valuable”.
It doesn’t really matter whether you’re a moral realist or not—instrumental rationality is about achieving your goals, whether they’re good goals or not. Just like math lets you crunch numbers, whether they’re real statistics or made up. But believing you shouldn’t make up statistics doesn’t therefore mean you don’t do math.
Could you provide a link to a blog post or essay where Eliezer endorses moral realism? Thanks!
Sorting Pebbles Into Correct Heaps notes that ‘right’ is the same sort of thing as ‘prime’ - it refers to a particular abstraction that is independent of anyone’s say-so.
Though Eliezer is also a sort of moral subjectivist; if we were built differently, we would be using the word ‘right’ to refer to a different abstraction.
Really, this is just shoehorning Eliezer’s views into philosophical debates that he isn’t involved in.
“It doesn’t really matter whether you’re a moral realist or not—instrumental rationality is about achieving your goals, whether they’re good goals or not.”
It seems to me that moral realism is an epistemic claim—it is a statement about how the world is—or could be—and that is definitely a matter that impinges on rationality.
Even if I didn’t care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering.
This seems to be similar to Eliezer’s beliefs. Relevant quote from Harry Potter and the Methods of Rationality:
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
“Well, obviously,” Harry said. “I couldn’t act on moral considerations if they lacked the power to move me. But that doesn’t mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!”
I don’t think that’s what Harry is saying there. Your quote from HPMOR seems to me to be more about the recognition that moral considerations are only one aspect of a decision-making process (in humans, anyway), and that just because that is true doesn’t mean that moral considerations won’t have an effect.