There is no natural scale on which to compare utility functions. [...] Unless your theory comes with a particular [interpersonal utility comparison] method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.
This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly “utilities” that are supposed to be measurable and interpersonally comparable, the whole concept doesn’t even begin to make sense. And yet the problem is routinely and nonchalantly ignored, even here, where people pride themselves on fearless and consistent reductionism.
Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it’s a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entities. Why would we privilege that over systems that postulate metaphysical entities and associated categorical imperatives of different kinds, e.g. traditional religions?
(If someone believes that there is a way in which these interpersonally comparable utilities could actually be grounded in physical reality, I’d be extremely curious to hear it.)
(If someone believes that there is a way in which these interpersonally comparable utilities could actually be grounded in physical reality, I’d be extremely curious to hear it.)
I asked about this before in the context of one of Julia Galef’s posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?
The only reasonable attempt at a response in that sub-thread is this comment. I don’t think the argument works, though. The problem is not just disagreement between different people’s intuitions, but also the fact that humans don’t do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons.
That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as a basis for extracting an interpersonal utility function, even if we ignore disagreements between people.
Intuitive utilitarian ethics are very helpful in everyday life.
There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. It would mean more money, but it would uproot his family, though it might help his career… a familiar kind of moral dilemma. When he asked a colleague for advice, he was told, “Just maximise total utility.” “Come on,” he is supposed to have replied, “this is serious!”
I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.
Sauce: http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.
Writing out costs and benefits is a technique that is sometimes helpful.
Sure, but “costs” and “benefits” are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.
In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don’t necessarily get much value (doesn’t really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.
Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.
Writing down costs and benefits is clearly an application of consequentialist ethics.
No, because “costs” and “benefits” are value-laden terms.
Suppose I am facing a standard moral dilemma: should I give my brother proper funerary rites, even though the city’s ruler has forbidden it? So I take your advice and write down costs and benefits. Costs—breaching my duty to obey the law, punishment for me, possibly reigniting the city’s civil war. Benefits—upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven’t committed to any ethical system; all I’ve done is clarify what’s at stake. For example, if I’m a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I’m a virtue ethicist, perhaps this shows it’s about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I’m just an egoist with no ethics: is the suffering of being imprisoned in a cave greater or less than the suffering I’ll experience knowing my brother’s corpse is being eaten by crows?
Ironically, the only person this doesn’t help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits—“maximise utility” is a slogan, not a procedure.
What are you arguing here? First you argue that “just maximize utility” is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.
Second you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one’s attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking into the directions that theory considers relevant.
Are you arguing anything else?

Could you provide some concrete examples?
I am thinking about petty personal disputes, say if one person finds something that another person does annoying. A common gut reaction is to immediately start staking territory about what is just and what is virtuous and so on, while the correct thing to do is focus on concrete benefits and costs of actions. The main reason this is better is not because it maximizes utility but because it minimizes argumentativeness.
Another good example is competition for a resource. Sometimes one feels like one deserves a fair share and this is very important, but if you have no special need for it, nor are there significant diminishing marginal returns, then it’s really not that big of a deal.
In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.
http://lesswrong.com/lw/b4f/sotw_check_consequentialism/

Thanks for the link, I am very underwhelmed too.
If I understand it correctly, one suggestion is equivalent to choosing some X and rescaling everyone’s utility function so that X has value 1. The obvious problems are the arbitrary choice of X, and the fact that on some people’s original scales X may have positive, negative, or zero value.

The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using P’s utility function as the absolute utility. I am not sure about this, but it seems to me that the result depends on P’s own preferences, and this cannot be fixed, because without preferences there could be no empathy.
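A toy sketch (with made-up numbers and outcomes, purely for illustration) makes the arbitrariness of the first suggestion concrete: which outcome X you normalize on can flip the resulting social ranking.

```python
# Toy sketch with made-up numbers: rescale each person's utility
# function so that a reference outcome X gets value 1, then sum.
# The "social" ranking that falls out depends on the choice of X.

alice = {"A": 2.0, "B": 6.0, "C": 1.0}
bob   = {"A": 9.0, "B": 3.0, "C": 1.0}

def normalize(utilities, x):
    # Breaks outright if u(x) == 0, and flips the scale's sign if
    # u(x) < 0 -- the further problems noted above.
    return {o: u / utilities[x] for o, u in utilities.items()}

def social_sum(x):
    na, nb = normalize(alice, x), normalize(bob, x)
    return {o: na[o] + nb[o] for o in alice}

# Normalizing on C ranks A highest; normalizing on A ranks B highest.
print(social_sum("C"))
print(social_sum("A"))
```

Nothing about the situation tells us which normalization is the “right” one, yet the aggregate verdict changes with it.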
And yet the problem is routinely and nonchalantly ignored, even here, where people pride themselves on fearless and consistent reductionism.
Yes. To be honest, it looks like the local version of reductionism takes ‘everything is reducible’ in a declarative sense, declaring that the concepts it uses are reducible regardless of their actual reducibility.
Greedy reductionism.
Thanks! That’s spot on. It’s what I think many of those ‘utility functions’ here are. The number of paperclips in the universe, too. I haven’t seen anything like that reduced to a formal definition of any kind.
The way humans actually decide on actions is by evaluating the world-difference that the action causes in a world-model, with everything kept very partial depending on the available time. Probabilities are rarely possible to employ in the world-model because the combinatorial space explodes very quickly. (Also, Bayesian propagation on arbitrary graphs is NP-complete, in the very practical sense of being computationally expensive.) Hence there isn’t some utility function deep inside governing the choices. Doing one’s best is mostly about putting limited computing time to the best use.
Then there’s some odd use of abstractions—for example, ‘every agent can be represented with a utility function, therefore whatever we say about utilities is relevant’. Never mind that this utility function may be the trivial ‘1 for doing whatever the agent chooses, 0 otherwise’, at which point everything becomes tautological.
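The tautology can be spelled out in a few lines (a hypothetical sketch; `chooses` stands in for any decision procedure at all):

```python
# Any agent, however it actually decides, can be dressed up after the
# fact as a "utility maximizer": assign utility 1 to whatever it
# chooses and 0 to everything else. The representation is vacuous --
# it predicts and explains nothing about the agent.

def trivial_utility(chooses):
    """Wrap an arbitrary choice function as a 'utility function'."""
    def u(option, options):
        return 1 if option == chooses(options) else 0
    return u

# A deliberately silly agent: always picks the alphabetically first option.
def silly_agent(options):
    return min(options)

u = trivial_utility(silly_agent)
options = ["paperclips", "aardvarks", "zebras"]
best = max(options, key=lambda o: u(o, options))
print(best)  # aardvarks -- the agent "maximizes utility" by construction
```

The representation theorem is satisfied, but no substance about what the agent values has been captured.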
(If someone believes that there is a way in which these interpersonally comparable utilities could actually be grounded in physical reality, I’d be extremely curious to hear it.)
I wonder if I am misunderstanding what you are asking, because interpersonal utility comparison seems like an easy thing that people do every day, using our inborn systems for sympathy and empathy.
When I am trying to make a decision that involves the conflicting desires of myself and another person, I generally use empathy to put myself in their shoes and try to think about desires that I have that are probably similar to theirs. Then I compare how strong those two desires of mine are and base my decision on that. Now, obviously I don’t make all ethical decisions like that; there are many where I just follow common rules of thumb. But I do make some decisions in this fashion, and it seems quite workable: the more fair-minded of my acquaintances don’t really complain about it unless they think I’ve made a mistake. Obviously it has scaling problems when attempting to base any type of utilitarian ethics on it, but I don’t think they are insurmountable.
Now, of course you could object that this method is unreliable, and ask whether I really know for sure if other people’s desires are that similar to mine. But this seems to me to just be a variant of the age-old problem of skepticism and doesn’t really deserve any more attention than the possibility that all the people I meet are illusions created by an evil demon. It’s infinitesimally possible that everyone I know doesn’t really have mental states similar to mine at all, that in fact they are all really robot drones controlled by a non-conscious AI that is basing their behavior on a giant lookup table. But it seems much more likely that other people are conscious human beings with mental states similar to mine that can be modeled and compared via empathy, and that this allows me to compare their utilities.
In fact, it’s hard to understand how empathy and sympathy could have evolved if they weren’t reasonably good at interpersonal utility comparison. If interpersonal utility comparison were truly impossible, then anyone who tried to use empathy to inform their behavior towards others would be disastrously wrong at figuring out how to treat others properly, would find themselves grievously offending the rest of their tribe, and would hence likely have their genes for empathy selected against. It seems that if interpersonal utility comparison were impossible, humans would never have evolved the ability or desire to make decisions based on empathy.
I am also curious as to why you refer to utility as “ghostly.” It seems to me that utility is commonly defined as the sum of the various desires and feelings that people have. Desires and feelings are computations and other processes in our brains, which are very solid real physical objects. So it seems like utility is at least as real as software. Of course, it’s entirely possible that you are using the word “utility” to refer to a slightly different concept than I am and that is where my confusion is coming from.
This, in my opinion, is by itself a decisive argument against utilitarianism.
You mean against preference-utilitarianism.
The vast majority of utilitarians I know are hedonistic utilitarians, to whom this criticism doesn’t apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I’ve noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.
I agree. I’m fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).
we will have the technology to accurately measure and sum hedons
Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as “more pleasure” or “less pleasure.” If evaluations of pleasure happen over 10x as many neurons, is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically?
A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating “psychophysical laws” and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it’s not going to be “read out” of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.
but it’s not going to be “read out” of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.
Sure, but I don’t think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small.
I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).
Is intrapersonal comparison possible? Personal boundaries don’t matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not?
One common argument against hedonistic utilitarianism is that there are “different kinds of pleasures”, and that they are “incommensurable”. But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures “differ in kind”, there’d be an empirical trade-off value based on how pleasant the hedonic states actually are.
Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we’d be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations these might get arbitrarily precise.
You make it sound as if there is some signal or register in the brain whose value represents “pleasure” in a straightforward way. To me it seems much more plausible that “pleasure” reduces to a multitude of variables that can’t be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.
That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.
The main problem I see with it is that it implies wireheading as the optimal outcome.
Or the utilitronium shockwave, rather. Which doesn’t even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I’ve come to think of it as a perfectly reasonable thing.
The main problem I see with it is that it implies wireheading as the optimal outcome.
AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading.
Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)
I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.
Right, if you cannot compare utilities, you are safe from the repugnant conclusion.
On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.
Yes but it doesn’t have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.
As I mentioned in the other reply, I’m not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.
Hedonistic utilitarianism is not about preferences at all. It’s about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant, in fact many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that’s preference utilitarianism or something else entirely.
Utilons aren’t hedons. You have one simple utility function that states you should maximize happiness minus suffering. That’s similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.
You still seem to be claiming that (a) you can calculate a number for hedons (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
I don’t claim that I, or anyone else, can do that right now. I’m saying there doesn’t seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?
As for (b), I don’t even see the problem. If (a) works, then you just do simple math after that. In case you’re worried about torture and dust specks not working out, check out section VI of this paper.
And regarding (a), here’s an example that approximates the kind of solutions we seek: In anti-depression drug tests, the groups with the actual drug and the control group have to fill out self-assessments of their subjective experiences, and at the same time their brain activity and behavior is observed. The self-reports correlate with the physical data.
I can’t speak for David (or, well, I can’t speak for that David), but for my own part, I’m willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I’m willing to accept for the sake of argument that people have “terminal values” which express what they really value, or that there exist “utilons” that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I’m also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument.
But the thing about accepting a claim for the sake of argument is that the argument I’m accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude “hedonic utilitarianism is better than all other moral philosophies.” To me, that payoff doesn’t seem worth the bullet you’re biting by assuming the existence of intersubjectively commensurable hedons.
The self-reports correlate with the physical data.
If someone were to demonstrate a scanning device whose output could be used to calculate a “hedonic score” for a given brain across a wide range of real-world brains and brainstates without first being calibrated against that brain’s reference class, and that hedonic score could be used to reliably predict the self-reports of that brain’s happiness in a given moment, I would be surprised and would change my mind about both the degree of variation of cognitive experience and the viability of intersubjectively commensurable hedons.
If you’re claiming this has actually been demonstrated, I’d love to see the study; everything I’ve ever read about has been significantly narrower than that.
If you’re merely claiming that it’s in principle possible that we live in a world where this could be demonstrated, I agree that it’s in principle possible, but see no particular evidence to support the claim that we do.
If you’re merely claiming that it’s in principle possible that we live in a world where this could be demonstrated, I agree that it’s in principle possible, but see no particular evidence to support the claim that we do.
Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.
But what makes you think you can just do simple math on the results? And which simple math—addition, adding the logarithms, taking the average or what? What adds up to normality?
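For illustration (with made-up hedon scores, and assuming for the sake of argument that hedons could be measured at all), the candidate aggregation rules already disagree on a trivial two-population comparison:

```python
import math

# Made-up hedon scores: a small very-happy population versus a large
# barely-happy one. Each candidate piece of "simple math" can pick a
# different winner, so the choice of aggregation rule is itself doing
# real ethical work.

small_happy = [10.0] * 5     # 5 people at hedon level 10
large_meh   = [1.1] * 100    # 100 people at hedon level 1.1

rules = [
    ("total",   sum),                                     # favors the large population
    ("average", lambda xs: sum(xs) / len(xs)),            # favors the small one
    ("log-sum", lambda xs: sum(math.log(x) for x in xs)), # favors the small one
]

for name, agg in rules:
    winner = "small" if agg(small_happy) > agg(large_meh) else "large"
    print(f"{name}: the {winner} population wins")
```

Measurement alone settles none of this; the verdict flips with the rule, and nothing in the hedon data itself says which rule “adds up to normality.”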
Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren’t, why a mixed utilon/hedon maximizer isn’t susceptible to Dutch booking. Maybe I’ll look through the logic again.
Hedonism doesn’t specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how “happy.” Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?
But aren’t those empirical difficulties, not fundamental ones? Don’t you think there’s a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can’t bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.
But aren’t those empirical difficulties, not fundamental ones?
What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.
This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly “utilities” that are supposed to be measurable and comparable interpersonally, the whole concept doesn’t even being to make sense. And yet the problem is commonly ignored routinely and nonchalantly, even here, where people pride themselves on fearless and consistent reductionism.
Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it’s a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entities. Why would we privilege that over systems that postulate metaphysical entities and associated categorical imperatives of different kinds, like e.g. traditional religions?
(If someone believes that there is a way how these interpersonally comparable utilities could actually be grounded in physical reality, I’d be extremely curious to hear it.)
I asked about this before in the context of one of Julia Galef’s posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?
The only reasonable attempt at a response in that sub-thread is this comment. I don’t think the argument works, though. The problem is not just disagreement between different people’s intuitions, but also the fact that humans don’t do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons.
That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as basis for extracting an interpersonal utility function, even if we ignore disagreements between people.
Intuitive utilitarian ethics are very helpful in everyday life.
There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. It would get more money, but it would uproot his family, but it might help his career… familiar kind of moral dilemma. Asking his colleague for advice, he got told “Just maximise total utility.” “Come on,” he is supposed to have replied, “this is serious!”
I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.
Sauce: http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.
Writing out costs and benefits is a technique that is sometimes helpful.
Sure, but “costs” and “benefits” are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.
In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don’t necessarily get much value (doesn’t really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.
Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.
No, because “costs” and “benefits” are value-laden terms.
Suppose I am facing a standard moral dilemma; should I give my brother proper funerary rites, even though the city’s ruler has forbidden it. So I take your advice and write down costs and benefits. Costs—breaching my duty to obey the law, punishment for me, possible reigniting of the city’s civil war. Benefits—upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven’t committed to any ethical system, all I’ve done is clarify what’s at stake. For example, if I’m a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I’m a virtue ethicist, perhaps this shows it’s about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I’m just an egoist with no ethics; is the suffering of being imprisoned in a cave greater or less than the suffering I’ll experience knowing my brother’s corpse is being eaten by crows?
Ironically, the only person this doesn’t help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits—“maximise utility” is a slogan, not a procedure.
What are you arguing here? First you argue that “just maximize utility” is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.
Second you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one’s attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking into the directions that theory considers relevant.
Are you arguing anything else?
Could you provide some concrete examples?
I am thinking about petty personal disputes, say if one person finds something that another person does annoying. A common gut reaction is to immediately start staking territory about what is just and what is virtuous and so on, while the correct thing to do is focus on concrete benefits and costs of actions. The main reason this is better is not because it maximizes utility but because it minimizes argumentativeness.
Another good example is competition for a resource. Sometimes one feels like one deserves a fair share and this is very important, but if you have no special need for it, nor are there significant diminishing marginal returns, then it’s really not that big of a deal.
In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.
http://lesswrong.com/lw/b4f/sotw_check_consequentialism/
Thanks for the link, I am very underwhelmed too.
If I understand it correctly, one suggestion is equivalent to choosing some X, and re-scaling everyone’s utility function so that X has value 1. The obvious problems are the arbitrary choice of X, and the fact that on some people’s original scales X may have positive, negative, or zero value.
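A toy sketch may make the failure mode concrete (the agents and numbers here are hypothetical, purely for illustration): normalizing each person’s utility function by a reference outcome X breaks down as soon as someone assigns X a zero or negative value, because dividing by u(X) is then meaningless or sign-flipping.

```python
# Toy sketch: normalizing each agent's utility scale by a reference outcome X.
# The agents and their utility values are hypothetical.

def normalize_by_reference(utilities, reference):
    """Rescale a utility function so the reference outcome has value 1."""
    scale = utilities[reference]
    if scale <= 0:
        # If X is worthless or bad on this agent's original scale,
        # dividing by u(X) either fails or flips the sign of everything.
        raise ValueError("reference outcome has non-positive utility")
    return {outcome: u / scale for outcome, u in utilities.items()}

alice = {"X": 2.0, "Y": 4.0}   # Alice values Y twice as much as X
bob   = {"X": -1.0, "Y": 3.0}  # Bob actively dislikes X

print(normalize_by_reference(alice, "X"))  # {'X': 1.0, 'Y': 2.0}
try:
    normalize_by_reference(bob, "X")       # no sensible rescaling exists
except ValueError as e:
    print("Bob:", e)
```

The choice of X also silently fixes the interpersonal weighting: picking a different reference outcome yields a different "sum of utilities," which is the arbitrariness being objected to.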
The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using the utility function of P as absolute utility. I am not sure about this, but it seems to me that the result depends on P’s own preferences, and this cannot be fixed, because without preferences there could be no empathy.
Yes. To be honest, it looks like the local version of reductionism takes “everything is reducible” in a declarative sense, declaring that the concepts it uses are reducible regardless of their actual reducibility.
Greedy reductionism.
Thanks! That’s spot on. That’s what I think many of those ‘utility functions’ here are. The number of paperclips in the universe, too. I haven’t seen anything like that reduced to a formal definition of any kind.
The way humans actually decide on actions is by evaluating the world-difference that the action causes in the world-model, everything being very partial depending on available time. Probabilities are rarely possible to employ in the world-model because the combinatorial space explodes very quickly (also, Bayesian propagation on arbitrary graphs is NP-hard, in the very practical sense of being computationally expensive). Hence there isn’t some utility function deep inside governing the choices. Doing one’s best is mostly about putting limited computing time to its best use.
Then there’s some odd use of abstractions—like, every agent can be represented with a utility function, therefore whatever we say about utilities is relevant. Never mind that this utility function is trivially 1 for doing whatever the agent chooses and 0 otherwise, so everything just becomes tautological.
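The tautology can be made explicit with a small sketch (the choice rule and options are hypothetical): any choice behaviour whatsoever can be “rationalized” by a utility function that assigns 1 to whatever was chosen and 0 to everything else, so the representation theorem by itself carries no information about the agent.

```python
# Sketch of the tautological representation: any choice function at all
# can be "explained" by a utility of 1 for the chosen option, 0 otherwise.
# The choice rule and options are hypothetical.

def trivial_utility(choose):
    """Build a utility function that the given chooser trivially maximizes."""
    def utility(option, options):
        return 1 if option == choose(options) else 0
    return utility

# An arbitrary, even perverse, choice rule: always pick the shortest word.
choose_shortest = lambda options: min(options, key=len)

u = trivial_utility(choose_shortest)
options = ["paperclips", "tea", "philosophy"]

# The "utility-maximizing" option is exactly what the rule picks -- by construction.
best = max(options, key=lambda o: u(o, options))
print(best)  # tea
```

Since the construction works for every possible `choose`, saying “the agent maximizes a utility function” constrains nothing unless the utility function has some independent characterization.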
I wonder if I am misunderstanding what you are asking, because interpersonal utility comparison seems like an easy thing that people do every day, using our inborn systems for sympathy and empathy.
When I am trying to make a decision that involves the conflicting desires of myself and another person, I generally use empathy to put myself in their shoes and try to think about desires of mine that are probably similar to theirs. Then I compare how strong those two desires of mine are and base my decision on that. Now, obviously I don’t make all ethical decisions like that; there are many where I just follow common rules of thumb. But I do make some decisions in this fashion, and it seems quite workable: the more fair-minded of my acquaintances don’t really complain about it unless they think I’ve made a mistake. Obviously it has scaling problems when attempting to base any type of utilitarian ethics on it, but I don’t think they are insurmountable.
Now, of course you could object that this method is unreliable, and ask whether I really know for sure if other people’s desires are that similar to mine. But this seems to me to just be a variant of the age-old problem of skepticism and doesn’t really deserve any more attention than the possibility that all the people I meet are illusions created by an evil demon. It’s infinitesimally possible that everyone I know doesn’t really have mental states similar to mine at all, that in fact they are all really robot drones controlled by a non-conscious AI that is basing their behavior on a giant lookup table. But it seems much more likely that other people are conscious human beings with mental states similar to mine that can be modeled and compared via empathy, and that this allows me to compare their utilities.
In fact, it’s hard to understand how empathy and sympathy could have evolved if they weren’t reasonably good at interpersonal utility comparison. If interpersonal utility comparison were truly impossible, then anyone who tried to use empathy to inform their behavior towards others would end up being disastrously wrong at figuring out how to properly treat others, find themselves grievously offending the rest of their tribe, and would hence likely have their genes for empathy selected against. It seems like if interpersonal utility comparison were impossible, humans would never have evolved the ability or desire to make decisions based on empathy.
I am also curious as to why you refer to utility as “ghostly.” It seems to me that utility is commonly defined as the sum of the various desires and feelings that people have. Desires and feelings are computations and other processes in our brains, which are very solid real physical objects. So it seems like utility is at least as real as software. Of course, it’s entirely possible that you are using the word “utility” to refer to a slightly different concept than I am and that is where my confusion is coming from.
You mean against preference-utilitarianism.
The vast majority of utilitarians I know are hedonistic utilitarians, where this criticism doesn’t apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I’ve noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.
I agree. I’m fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).
Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as “more pleasure” or “less pleasure.” If evaluations of pleasure happen over 10x as many neurons is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically?
A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating “psychophysical laws” and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it’s not going to be “read out” of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.
Sure, but I don’t think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small.
I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).
OK, I’ve got to ask: what’s your confidence based on, in detail? It’s not clear to me that “sum hedons” even means anything.
Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn’t seem to be the case.
Is intrapersonal comparison possible? Personal boundaries don’t matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not?
One common argument against hedonistic utilitarianism is that there are “different kinds of pleasures”, and that they are “incommensurable”. But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures “differ in kind”, there’d be an empirical trade-off value based on how pleasant the hedonic states actually are.
Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we’d be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations these might get arbitrarily precise.
You make it sound as if there is some signal or register in the brain whose value represents “pleasure” in a straightforward way. To me it seems much more plausible that “pleasure” reduces to a multitude of variables that can’t be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.
That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.
Or the utilitronium shockwave, rather. Which doesn’t even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I’ve come to think of it as a perfectly reasonable thing.
AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading.
Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)
I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.
Hedonistic utilitarianism (“what matters is the aggregate happiness”) runs into the same repugnant conclusion.
But this happens exactly because interpersonal (hedonistic) utility comparison is possible.
Right, if you cannot compare utilities, you are safe from the repugnant conclusion.
On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.
Yes but it doesn’t have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.
As I mentioned in the other reply, I’m not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.
Hedonistic utilitarianism is not about preferences at all. It’s about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.
Maybe I misunderstand how total hedonistic utilitarianism works. Don’t you ever construct an aggregate utility function?
No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant; in fact, many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that’s preference utilitarianism or something else entirely.
How is that not an aggregate utility function?
Utilons aren’t hedons. You have one simple utility function that states you should maximize happiness minus suffering. That’s similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.
You still seem to be claiming that (a) you can calculate a number for hedons (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?
I don’t claim that I, or anyone else, can do that right now. I’m saying there doesn’t seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?
As for (b), I don’t even see the problem. If (a) works, then you just do simple math after that. In case you’re worried about torture and dust specks not working out, check out section VI of this paper.
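The “simple math after that” claim can be sketched directly: if one grants (contentiously) that hedonic states can be assigned real numbers at all, aggregation really is trivial arithmetic, and all the difficulty lives in the measurement step. The populations and values below are hypothetical.

```python
# Sketch of hedonistic aggregation, assuming (contentiously) that hedonic
# states can be assigned real numbers in the first place. Values are
# hypothetical and purely illustrative.

def total_hedonic_value(hedonic_states):
    """Surplus of positive hedonic states over negative ones."""
    return sum(hedonic_states)

# Hypothetical momentary hedonic readings, pooled across a population;
# note that interpersonal boundaries play no role -- the list is just pooled.
world_a = [3.0, 2.5, -1.0, 4.0]   # fewer, more intense experiences
world_b = [0.5] * 20              # many barely-positive experiences

print(total_hedonic_value(world_a))  # 8.5
print(total_hedonic_value(world_b))  # 10.0 -- the repugnant-conclusion flavor
```

The arithmetic itself is uncontroversial; the dispute in this thread is entirely about whether step (a), producing those numbers, is an empirical problem or a conceptual one.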
And regarding (a), here’s an example that approximates the kind of solutions we seek: In anti-depression drug tests, the groups with the actual drug and the control group have to fill out self-assessments of their subjective experiences, and at the same time their brain activity and behavior is observed. The self-reports correlate with the physical data.
I can’t speak for David (or, well, I can’t speak for that David), but for my own part, I’m willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I’m willing to accept for the sake of argument that people have “terminal values” which express what they really value, or that there exist “utilons” that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I’m also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument.
But the thing about accepting a claim for the sake of argument is that the argument I’m accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude “hedonic utilitarianism is better than all other moral philosophies.” To me, that payoff doesn’t seem worth the bullet you’re biting by assuming the existence of intersubjectively commensurable hedons.
If someone were to demonstrate a scanning device whose output could be used to calculate a “hedonic score” for a given brain across a wide range of real-world brains and brainstates without first being calibrated against that brain’s reference class, and that hedonic score could be used to reliably predict the self-reports of that brain’s happiness in a given moment, I would be surprised and would change my mind about both the degree of variation of cognitive experience and the viability of intersubjectively commensurable hedons.
If you’re claiming this has actually been demonstrated, I’d love to see the study; everything I’ve ever read about has been significantly narrower than that.
If you’re merely claiming that it’s in principle possible that we live in a world where this could be demonstrated, I agree that it’s in principle possible, but see no particular evidence to support the claim that we do.
Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.
But what makes you think you can just do simple math on the results? And which simple math—addition, adding the logarithms, taking the average or what? What adds up to normality?
Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren’t, why isn’t a mixed utilon/hedon maximizer susceptible to dutch booking. Maybe I’ll look through the logic again.
Hedonism doesn’t specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how “happy.” Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?
But aren’t those empirical difficulties, not fundamental ones? Don’t you think there’s a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can’t bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.
What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.