Pinpointing Utility
Following Morality is Awesome. Related: Logical Pinpointing, VNM.
The eternal question, with a quantitative edge: A wizard has turned you into a whale, how awesome is this?
“10.3 Awesomes”
Meditate on this: What does that mean? Does that mean it’s desirable? What does that tell us about how awesome it is to be turned into a whale? Explain. Take a crack at it for real. What does it mean for something to be labeled as a certain amount of “awesome” or “good” or “utility”?
What is This Utility Stuff?
Most of us agree that the VNM axioms are reasonable, and that they imply that we should be maximizing this stuff called “expected utility”. We know that expectation is just a weighted average, but what’s this “utility” stuff?
Well, to start with, it’s a logical concept, which means we need to pin it down with the axioms that define it. For the moment, I’m going to conflate utility and expected utility for simplicity’s sake. Bear with me. Here are the conditions that are necessary and sufficient to be talking about utility:
1. Utility can be represented as a single real number.
2. Each outcome has a utility.
3. The utility of a probability distribution over outcomes is the expected utility.
4. The action that results in the highest utility is preferred.
5. No other operations are defined.
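A toy sketch of these conditions in code may help. The outcomes and numbers here are invented for illustration; the point is only the shape of the concept:

```python
# Conditions 1 and 2: each outcome gets a single real number.
utility = {"whale_day": 10.3, "normal_day": 0.0, "sandwich_day": 1.0}

def expected_utility(lottery):
    """Condition 3: the utility of a probability distribution over outcomes."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

def preferred(actions):
    """Condition 4: the action with the highest expected utility is preferred."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Condition 5: no other operations -- in particular, no reading meaning
# into the raw numbers themselves.

actions = {
    "accept_wizard": {"whale_day": 1.0},
    "decline": {"normal_day": 0.5, "sandwich_day": 0.5},
}
print(preferred(actions))  # -> accept_wizard
```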
I hope that wasn’t too esoteric. The rest of this post will be explaining the implications of those statements. Let’s see how they apply to the awesomeness of being turned into a whale:
1. “10.3 Awesomes” is a real number.
2. We are talking about the outcome where “a wizard has turned you into a whale”.
3. There are no other outcomes to aggregate with, but that’s OK.
4. There are no actions under consideration, but that’s OK.
Oh. Not even taking the value?
Note condition 5 especially. You can probably look at the number without causing trouble, but if you try to treat it as meaningful for something other than conditions 3 and 4, even accidentally, that’s a type error.
Unfortunately, you do not have a finicky compiler that will halt and warn you if you break the rules. Instead, your error will be silently ignored, and you will go on, blissfully unaware that the invariants in your decision system no longer pinpoint VNM utility. (Uh oh.)
Unshielded Utilities, and Cautions for Utility-Users
Let’s imagine that utilities are radioactive: if we are careful with our containment procedures, we can safely combine and compare them, but if we interact with an unshielded utility, it’s over; we’ve committed a type error.
To even get a utility to manifest itself in this plane, we have to do a little ritual. We have to take the ratio between two utility differences. For example, if we want to get a number for the utility of being turned into a whale for a day, we might take the difference between that scenario and what we would otherwise expect to do, and then take the ratio between that difference and the difference between a normal day and a day where we also get a tasty sandwich. (Make sure you take the absolute value of your unit, or you will reverse your utility function, which is a bad idea.)
So the form that the utility of being a whale manifests as might be “500 tasty sandwiches better than a normal day”. We have chosen “a normal day” for our datum, and “tasty sandwiches” for our units. Of course we could have just as easily chosen something else, like “being turned into a whale” as our datum, and “orgasms” for our units. Then it would be “0 orgasms better than being turned into a whale”, and a normal day would be “−400 orgasms from the whale-day”.
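Here is the ritual as code. The raw values in `u` below are an arbitrary internal scale invented for this sketch; only ratios of differences survive, so any positive affine rescaling gives the same measurements:

```python
# Arbitrary internal scale -- never meaningful on its own.
u = {"normal": 8.0, "sandwich": 8.25, "orgasm": 8.3125, "whale": 133.0}

def measure(u, outcome, datum, unit, unit_datum):
    """The summoning ritual: a ratio of two utility differences.
    Returns how many `unit`s above `datum` the outcome sits."""
    step = abs(u[unit] - u[unit_datum])  # take abs, or you reverse your utility function
    return (u[outcome] - u[datum]) / step

print(measure(u, "whale", "normal", "sandwich", "normal"))  # -> 500.0 sandwiches
print(measure(u, "whale", "normal", "orgasm", "normal"))    # -> 400.0 orgasms

# A positive affine rescaling of u changes none of the measurements:
v = {k: 3 * x + 10 for k, x in u.items()}
print(measure(v, "whale", "normal", "sandwich", "normal"))  # -> still 500.0
```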
You say: “But you shouldn’t define your utility like that, because then you are experiencing huge disutility in the normal case.”
Wrong, and radiation poisoning, and type error. You tried to “experience” a utility, which is not in the defined operations. Also, you looked directly at the value of an unshielded utility (also known as numerology).
We summoned the utilities into the real numbers, but they are still utilities, and we still can only compare and aggregate them. The summoning only gives us a number that we can numerically do those operations on, which is why we did it. This is the same situation as time, position, velocity, etc, where we have to select units and datums to get actual quantities that mathematically behave like their ideal counterparts.
Sometimes people refer to this relativity of utilities as “positive affine structure” or “invariant up to a scale and shift”. That phrasing confuses me: it makes me think of an equivalence class of utility functions, which don’t agree on the actual numbers but can be made to agree with a linear transform, rather than making me think of a utility function as a space I can measure distances in. I’m an engineer, not a mathematician, so I find it much more intuitive and less confusing to think in terms of units and datums, even though it’s basically the same thing. This way, the utility function can scale and shift all it wants, and my numbers will always be the same. Equivalently, all agents that share my preferences will always agree that a day as a whale is “400 orgasms better than a normal day”, even if they use another basis themselves.
So what does it mean that being a whale for a day is 400 orgasms better than a normal day? Does it mean I would prefer 400 orgasms to a day as a whale? Nope. Orgasms don’t add up like that; I’d probably be quite tired of them by 15. (Remember that “orgasms” were defined as the difference between a day without an orgasm and a day with one, not as the utility of a marginal orgasm in general.) What it means is that I’d be indifferent between a normal day with a 1/400 chance of being a whale, and a normal day with a guaranteed extra orgasm.
That is, utilities are fundamentally about how your preferences react to uncertainty. For example, you don’t have to think that each marginal year of life is as valuable as the last, if you don’t think you should take a gamble that will double your remaining lifespan with 60% certainty and kill you otherwise. After all, all that such a utility assignment even means is that you would take such a gamble. In the words of VNM:
We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.
But suppose there are very good arguments that have nothing to do with uncertainty for why you should value each marginal life-year as much as the last. What then?
Well, “what then” is that we spend a few weeks in the hospital dying of radiation poisoning, because we tried to interact with an unshielded utility again (utilities are radioactive, remember?). The specific error is that we tried to manipulate the utility function with something other than comparison and aggregation. Touching a utility directly is just as much an error as observing it directly.
But if the only way to define your utility function is with thought experiments about what gambles you would take, and the only use for it is deciding what gambles you would take, then isn’t it doing no work as a concept?
The answer is no, but this is a good question because it gets us closer to what exactly this utility function stuff is about. The utility of utility is that defining how you would behave in one gamble puts a constraint on how you would behave in some other related gambles. As with all math, we put in some known facts, and then use the rules to derive some interesting but unknown facts.
For example, if we have decided that we would be indifferent between a tasty sandwich and a 1/500 chance of being a whale for tomorrow, and that we’d be indifferent between a tasty sandwich and a 30% chance of sun instead of the usual rain, then we should also be indifferent between a certain sunny day and a 1/150 chance of being a whale.
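In sandwich units (datum = a normal rainy day, unit = one tasty sandwich), the chain above is just arithmetic:

```python
# Units: "tasty sandwiches above a normal rainy day" (the datum is 0).
sandwich = 1.0

# Indifference 1: a sandwich ~ a 1/500 chance of a whale-day,
# so u(whale) = 500 sandwiches.
whale = sandwich * 500

# Indifference 2: a sandwich ~ a 30% chance of sun,
# so u(sun) = 1/0.3 ~= 3.33 sandwiches.
sun = sandwich / 0.30

# Derived: a certain sunny day ~ a p chance of a whale-day, where
p = sun / whale
print(round(1 / p))  # -> 150, i.e. a 1/150 chance
```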
Monolithicness and Marginal (In)Dependence
If you are really paying attention, you may be a bit confused, because it seems to you that money or time or some other consumable resource can force you to assign utilities even if there is no uncertainty in the system. That issue is complex enough to deserve its own post, so I’d like to delay it for now.
Part of the solution is that as we defined them, utilities are monolithic. This is the implication of “each outcome has a utility”. What this means is that you can’t add and recombine utilities by decomposing and recombining outcomes. Being specific, you can’t take a marginal whale from one outcome and staple it onto another outcome, and expect the marginal utilities to be the same. For example, maybe the other outcome has no oceans for your marginal whale.
For a bigger example, what we have said so far about the relative value of sandwiches and sunny days and whale-days does not necessarily imply that we are indifferent between a 1/250 chance of being a whale and any of the following:

A day with two tasty sandwiches. (Remember that a tasty sandwich was defined as a specific difference, not a marginal sandwich in general, which has no reason to have a consistent marginal value.)

A day with a 30% chance of sun and a certain tasty sandwich. (Maybe the tasty sandwich and the sun at the same time is horrifying for some reason. Maybe someone drilled into you as a child that “bread in the sun” was bad bad bad.)

etc. You get the idea. Utilities are monolithic and fundamentally associated with particular outcomes, not marginal outcome-pieces.
However, as in probability theory, where each possible outcome technically has its very own probability, in practice it is useful to talk about a concept of independence.
So, for example, even though the axioms don’t guarantee in general that it will ever be the case, it may work out in practice that, given some conditions (like there being nothing special about bread in the sun, and my happiness not being near saturation), the utility of a marginal tasty sandwich is independent of a marginal sunny day. That means sun+sandwich is as much better than just sun as just a sandwich is better than baseline, which ultimately means I am indifferent between {50%: sunny+sandwich; 50%: baseline} and {50%: sunny; 50%: sandwich}, and other such bets. (We need a better solution for rendering probability distributions in prose.)
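Under that independence assumption, the bookkeeping checks out; the marginal values below are invented for illustration:

```python
# Hypothetical marginal utilities, valid only under the stated conditions.
base, d_sun, d_sandwich = 0.0, 3.0, 1.0

u = {  # independence: the marginals simply add
    "baseline": base,
    "sunny": base + d_sun,
    "sandwich": base + d_sandwich,
    "sunny+sandwich": base + d_sun + d_sandwich,
}

def eu(lottery):
    return sum(p * u[o] for o, p in lottery.items())

a = eu({"sunny+sandwich": 0.5, "baseline": 0.5})
b = eu({"sunny": 0.5, "sandwich": 0.5})
print(a, b, a == b)  # -> 2.0 2.0 True: indifferent between the two bets
```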
Notice that the independence of marginal utilities can depend on conditions and that independence is with respect to some other variable, not a general property. The utility of a marginal tasty sandwich is not independent of whether I am hungry, for example.
There is a lot more to this independence thing (and linearity, and risk aversion, and so on), so it deserves its own post. For now, the point is that the monolithicness thing is fundamental, but in practice we can sometimes look inside the black box and talk about independent marginal utilities.
Dimensionless Utility
I liked this quote from the comments of Morality is Awesome:
Morality needs a concept of awfulness as well as awesomeness. In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.
Let’s develop that second sentence a bit more. If all your options suck, what do you do? You still have to choose. So let’s imagine we are in the depths of hell and see what our theories have to say about it:
Day 78045. Satan has presented me with three options:

1. Go on a date with Satan Himself. This will involve romantically torturing souls together, subtly steering mortals towards self-destruction, watching people get thrown into the lake of fire, and some very unsafe, very non-consensual sex with the Adversary himself.

2. Satan’s court wizard will turn me into a whale and release me into the lake of fire, to roast slowly for the next month, kept alive by twisted black magic.

3. A paperclip maximizer will be released on the living world.
Wat do?
They all seem pretty bad, but “pretty bad” is not a utility. We could quantify paperclipping as a couple hundred billion lives lost. Being a whale in the lake of fire would be awful, but a bounded sort of awful. A month of endless horrible torture. The “date” is having to be on the giving end of what would more or less happen anyway, and then getting savaged by Satan. Still none of these are utilities.
Coming up with actual utility numbers for these in terms of tasty sandwiches and normal days is hard; it would be like measuring the microkelvin temperatures of your physics experiment with a Fahrenheit kitchen thermometer; in principle it might work, but it isn’t the best tool for the job. Instead, we’ll use a different scheme this time.
Engineers (and physicists?) sometimes transform problems into a dimensionless form that removes all redundant information from the problem. For example, for a heat conduction problem, we might define an isomorphic dimensionless temperature so that real temperatures between 78 and 305 C become dimensionless temperatures between 0 and 1. Transforming a problem into dimensionless form is nearly always helpful, often in really surprising ways. We can do this with utility too.
Back to the depths of hell. The date with Satan is clearly the best option, so it gets dimensionless utility 1. The paperclipper gets 0. On that scale, I’d say roasting in the lake of fire is like 0.999 or so, but that might just be scope insensitivity. We’ll take it for now.
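The transformation itself is one line: shift so the worst option lands at 0 and scale so the best lands at 1. The raw numbers below are invented placeholders; only their ordering and difference-ratios matter:

```python
def dimensionless(u, best, worst):
    """Affinely rescale so that u[worst] -> 0.0 and u[best] -> 1.0."""
    lo, hi = u[worst], u[best]
    return {o: (x - lo) / (hi - lo) for o, x in u.items()}

# Invented raw values for the three hellish options.
raw = {"paperclipper": -5e12, "whale_month": -5e9, "satan_date": -50.0}
d = dimensionless(raw, best="satan_date", worst="paperclipper")
print(d["paperclipper"], d["satan_date"], round(d["whale_month"], 3))
# -> 0.0 1.0 0.999

# The indifference probability drops right out: a lottery of satan_date with
# probability p and paperclipper with (1 - p) matches whale_month exactly
# when p = d["whale_month"].
```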
The advantages with this approach are:

The numbers are more intuitive. −5e12 QALYs, −1 QALY, and −50 QALYs from a normal day, or the equivalent in tasty sandwiches, just doesn’t have the same feeling of clarity as 0, 1 and .999. (For me at least. And yes I know those numbers don’t quite match.)

Not having to relate the problem quantities to far-away datums or drastically misappropriated units (tasty sandwiches, for this problem) makes the numbers easier and more direct to come up with. Also, we have to come up with fewer of them. The problem is self-contained.

If defined right, the connection between probability and utility becomes extra-clear. For example: what chance between a Satan-date and a paperclipper would make me indifferent to a lake-of-fire-whale-month? 0.999! Unitless magic!

All confusing redundant information (like negative signs) is removed, which makes it harder to accidentally do numerology or commit a type error.

All redundant information is removed, which means you find many more similarities between problems. The value of this in general cannot be overstated. Just look at the generalizations made about the Reynolds number! “[vortex shedding] occurs for any fluid, size, and speed, provided that Re is between ~40 and 10^3”. What! You can just say that in general? Magic! I haven’t actually done enough utility problems to know that we’ll find stuff like that, but I trust dimensionless form.
Anyways, it seems that going on that date is what I ought to do. So did we need a concept of awfulness? Did it matter that all the options sucked? Nope; the decision was isomorphic in every way to choosing lunch between a BLT, a turkey club, and a handful of dirt.
There are some assumptions in that lunch bit, and they’re worth discussing. It seems counterintuitive, or even wrong, to say that your decision-process when faced with lunch should be the same as when faced with a decision involving torture, rape, and paperclips. The latter seems somehow more important. Where does that come from? Is it right?
This may deserve a bigger discussion, but basically, if you have finite resources (thought-power, money, energy, stress) that are conserved or even related across decisions, you get coupling of “different” decisions in a way that we didn’t have here. Your intuitions are calibrated for that case. Once you have decoupled the decision by coming up with the actual candidate options, the depths-of-hell decision and the lunch decision really are totally isomorphic. I’ll probably address this properly later, if I discuss the instrumental utility of resources.
Anyways, once you put the problem in dimensionless form, a lot of decisions that seemed very different become almost the same, and a lot of details that seemed important or confusing just disappear. Bask in the clarifying power of a good abstraction.
Utility is Personal
So far we haven’t touched the issue of interpersonal utility. That’s because that topic isn’t actually about VNM utility! There was nothing in the axioms above about there being a utility for each {person, outcome} pair, only for each outcome.
It turns out that if you try to compare utilities between agents, you have to touch unshielded utilities, which means you get radiation poisoning and go to type-theory hell. Don’t try it.
And yet, it seems like we ought to care about what others prefer, and not just our own self-interest. But that caring belongs inside the utility function, in moral philosophy, not out here in decision theory.
VNM has nothing to say on the issue of utilitarianism besides the usual preference-uncertainty interaction constraints, because VNM is about the preferences of a single agent. If that single agent cares about the preferences of other agents, that goes inside the utility function.
Conversely, because VNM utility is out here, axiomatized for the sovereign preferences of a single agent, we don’t much expect it to show up in there, in a discussion of utilitarian preference aggregation. In fact, if we do encounter it in there, it’s probably a sign of a failed abstraction.
Living with Utility
Let’s go back to how much work utility does as a concept. I’ve spent the last few sections hammering on the work that utility does not do, so you may ask “It’s nice that utility theory can constrain our bets a bit, but do I really have to define my utility function by pinning down the relative utilities of every single possible outcome?”.
Sort of. You can take shortcuts. We can, for example, wonder all at once whether, for all possible worlds where such is possible, you are indifferent between saving n lives and {50%: saving 2*n; 50%: saving 0}.
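If the answer is yes, that one question pins utility down to be linear in lives saved. A quick sketch of what the constraint does and doesn’t allow (`sqrt` stands in for an arbitrary risk-averse alternative):

```python
import math

def eu(lottery, u):
    """Expected utility of [(outcome, probability), ...] under utility u."""
    return sum(p * u(x) for x, p in lottery)

linear = lambda lives: float(lives)
concave = lambda lives: math.sqrt(lives)  # a risk-averse alternative

for n in (1, 100, 10**6):
    gamble = [(2 * n, 0.5), (0, 0.5)]
    # Linear utility is indifferent at every scale:
    assert eu(gamble, linear) == linear(n)
    # A concave utility strictly prefers the sure n lives:
    assert eu(gamble, concave) < concave(n)
print("linear: indifferent; concave: takes the sure thing")
```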
If that seems reasonable and doesn’t break in any case you can think of, you might keep it around as a heuristic in your ad-hoc utility function. But then maybe you find a counterexample where you don’t actually prefer the implications of such a rule. So you have to refine it a bit to respond to this new argument. This is OK; the math doesn’t want you to do things you don’t want to.
So you can save a lot of small thought experiments by doing the right big ones, like the one above, but the more sweeping a generalization you make, the more probable it is that it contains an error. In fact, concept-space is pretty huge, so trying to construct a utility function without inside information is going to take a while no matter how you approach it. Something like disassembling the algorithms that produce your intuitions would be much more efficient, but that’s probably beyond science right now.
In any case, in the interim, before we figure out how to formally reason the whole thing out in advance, we have to get by with some good heuristics and our current intuitions, with a pinch of last-minute sanity checking against the VNM rules. Ugly, but better than nothing.
The whole project is made quite a bit harder in that we are not just trying to reconstruct an explicit utility function from revealed preference; we are trying to construct a utility function for a system that doesn’t even currently have consistent preferences.
At some point, either the concept of utility isn’t really improving our decisions, or it will come in conflict with our intuitive preferences. In some cases it’s obvious how to resolve the conflict, in others, not so much.
But if VNM contradicts our current preferences, why do we think it’s a good idea at all? Surely it’s not wise to be tampering with our very values?
The reason we like VNM is that we have a strong meta-intuition that our preferences ought to be internally consistent, and VNM seems to be the only way to satisfy that. But it’s good to remember that this is just another intuition, to be weighed against the rest. Are we ironing out garbage inconsistencies, or losing valuable information?
At this point I’m dangerously out of my depth. As far as I can tell, the great project of moral philosophy is an adult problem, not suited for mere mortals like me. Besides, I’ve rambled long enough.
Conclusions
What a slog! Let’s review:

Maximize expected utility, where utility is just an encoding of your preferences that ensures a sane reaction to uncertainty.

Don’t try to do anything else with utilities, or demons may fly out of your nose. This especially includes looking at the sign or magnitude, and comparing between agents. I call these things “numerology” or “interacting with an unshielded utility”.

The default for utilities is that utilities are monolithic and inseparable from the entire outcome they are associated with. It takes special structure in your utility function to be able to talk about the marginal utility of something independently of particular outcomes.

We have to use the difference-and-ratio ritual to summon the utilities into the real numbers. Record utilities using explicit units and datums, and use dimensionless form for your calculations, which will make many things much clearer and more robust.

If you use a VNM basis, you don’t need a concept of awfulness, just awesomeness.

If you want to do philosophy about the shape of your utility function, make sure you phrase it in terms of lotteries, because that’s what utility is about.

The desire to use VNM is just another moral intuition in the great project of moral philosophy. It is conceivable that you will have to throw it out if it causes too much trouble.

VNM says nothing about your utility function. Consequentialism, hedonism, utilitarianism, etc are up to you.
What I took away from this post is that confusing a decision-theoretic utility function with hedonic utility will make you very sick, and you might have to go to the hospital. I like this.
Stay safe!
It would be interesting to see more exposition and discussion of hedonic utility. For example, why is there a distinction between positive and negative hedonic utility (i.e., hedons vs. dolors), which do not seem to make decision-theoretic sense? Has anyone proposed a design for an AI or reinforcement learning agent that can be said to make use of hedonic utility, which might help explain its evolutionary purpose?
I think I’m not quite understanding your question.
If I’ve understood you correctly, you’re asking why we’re wired to respond differently to avoiding changes that make us less happy (that is, avoiding dolors) than to seeking out changes which make us more happy (that is, seeking out hedons) even if the magnitude of the change is the same. For example, why avoiding a loss motivates us differently than gaining something of equivalent value.
If that’s right, can you clarify why you expect an answer beyond “historical reasons”?
That is, we have a lot of independent systems for measuring “hedons” and “dolors” in different modalities; we respond to grief with different circuits than we respond to pain, for example. We create this hypothetical construct of an intermodal “hedon/dolor” based on people’s lottery behavior… do I prefer a 50% chance of losing my husband or having an elephant jump up and down on my leg for ten minutes, and so forth. And we know that people have inconsistent lottery behaviors and can be Dutch booked, so a “hedon/dolor” is at best an idealization of what are in humans several different inconsistently-commensurable units of happiness and unhappiness.
Is there anything else that needs to be explained, here?
It sounds like you’re assuming that this jury-rigged system was specifically selected for, and you want to know what exerted the selection pressure, when there doesn’t seem to be any reason to assume that it’s anything more than the best compromise available between a thousand independently-selected-for motivational systems operating on the same brain.
Or have I misunderstood your question?
It’s not clear that the two can be reconciled. It’s also not clear that the two can’t be reconciled.
Suppose for simplicity there are just hedons and dolors into which every utilitarian reaction can be resolved and which are independent. Then every event occupies a point in a plane. Now, ordering real numbers (hedons with no dolorific part or dolors with no hedonic part) is easy and more or less unambiguous. However, it’s not immediately obvious whether there’s a useful way to specify an order over all events. A zero-hedon, one-dolor event clearly precedes a one-hedon, zero-dolor event in the suck-to-win ordering. But what about a one-hedon, one-dolor event vs. a zero-hedon, zero-dolor event?
It might seem that we can simply take the signed difference of the parts (so in that last example, 1 − 1 = 0 − 0, and the events are ‘equal’), but the stipulation of independence seems like it forbids such arithmetic (like subtracting apples from oranges).
Orders on the complex numbers that have been used for varying applications (assuming this has been done) might shed some light on the matter.
Clearly a CEV over all complex (i.e. consisting of exactly a possibly-zero hedonic part and possibly-zero dolorific part) utilities would afford comparison between any two events, but this doesn’t seem to help much at this point.
Beyond knowledge of the physical basis of pleasure and pain, brain scans of humans experiencing masochistic pleasure might be a particularly efficient insight generator here. Even if, say, pure pleasure and pure pain appear very differently on an MRI, it might be possible to reduce them to a common unit of utilitarian experience that affords direct comparison. On the other hand, we might have to conclude that there are actually millions of incommensurable ‘axes’ of utilitarian experience.
It can be proven that there is no ordering of complex numbers which is compatible with the normal conventions of multiplication and addition. It’s not even possible to reliably separate complex numbers into “positive” and “negative”, such that multiplying two positive numbers gives a positive number, multiplying a positive number by −1 gives a negative number, multiplying a negative number by −1 gives a positive number, and −1 is negative.
To further complicate the matter, I don’t think that hedons and dolors are fully independent; if you place the ‘hedons’ line along the x-axis, the ‘dolors’ line may be a diagonal. Or a curve.
That settled that quickly. Thanks.
Then I suppose the next question in this line would be: to what extent can we impose useful orders on R^2? (I’d need to study the proof in more detail, but it seems that the no-go theorem on C arises from its ring structure, so we have to drop it.) I’m thinking the place to start is specifying some obvious properties (e.g. an outcome with positive hedonic part and zero dolorific part always comes after the opposite, i.e. is better), though I’m not sure if there’d be enough of them to begin pinning something down.
Edit: Or, oppositely, chipping away at suspect ring axioms and keeping as much structure as possible. Though if it came down to case-checking axioms, it might explode.
The most useful order on R^2 seems to be an order by the absolute value. (That is to say, the distance from the origin.) This is an ordering that has many uses, and gives us certain insights into the structure of the space. (Note though that it is only a partial order, not a complete one, as you can have two different points with the same absolute value.)
Yeah, absolute value is the second-most obvious one, but I think it breaks down:
It seems that if we assume utility to be a function of exactly (i.e. no more and no less than) hedons and dolors in R^2, we might as well stipulate that each part is non-negative, because it would then seem that any sense of dis-hedons must be captured by dolors and vice versa. So it seems that we may assume non-negativity WLOG. Then, given non-negativity of components, we can actually compare outcomes with the same absolute value:
Given non-negativity, we can simplify (I’m pretty sure, but even if not, I think a slightly modified argument still goes through) our metric from sqrt(h^2 + d^2) (where h, d are the hedonic and dolorific parts) to just d + h. Now suppose that (h1,d1) and (h2,d2) are such that h1 + d1 = h2 + d2. Then:
1) If h1 < h2, then d1 > d2, and so (h1,d1) is clearly worse than (h2,d2)
2) If h1 = h2, then d1 = d2, and the two are equi-preferable
3) If h1 > h2, then d1 < d2, and so (h1,d1) is clearly better than (h2,d2)
So within equivalence classes there will be differing utilities.
Moreover, (0,2) << (0,0) << (2,0), but the LHS and RHS fall in the same equivalence class under absolute value. So the intervals of utility occupied by equivalence classes can overlap. (Where e.g. ‘A << B’ means ‘B is preferable over A’.)
Hence absolute value seems incompatible with the requirements of a utility ordering.
~
The most obvious function of (h,d) to form equivalence classes is h minus d, as in my earlier comment, but that seems to break down (if we assume every pair of elements in a given equivalence class has the same utility) by its reliance on fungibility of hedons and dolors. A ‘marginal dolor function’ that gives the dolor-worth of the next hedon after already having x hedons seems like it might fix this, but it also seems like it would be a step away from practicality.
You are correct, it does break down like that. Actually, for some reason I wasn’t thinking of a space where you want to maximize one value and minimize another, but one where you want to maximize both. That is a reasonable simplification, but it does not translate well to our problem.
Another potential solution, if you want to maximize hedons and minimize dolors: you could try sorting by the arguments of points (i.e. maximize arctan(hedons/dolors), or in other words, given that both hedons and dolors are positive, maximize hedons/dolors itself).
Ultimately, I think you need some relation between hedons and dolors, something like “one hedon is worth −3.67 dolors” or similar. In the end, you do have to choose whether (1 hedon, 1 dolor) is preferable to (0 hedons, 0 dolors). (And also whether (2 hedons, 1 dolor) is preferable to (1 hedon, 0 dolors), and whether (1 hedon, 2 dolors) is preferable to (0 hedons, 1 dolor), and so forth.)
I suspect this relation would be linear, as the way we have defined hedons and dolors seems to suggest this, but more than that has to be left up to the agent who this utility system belongs to. And on pain of lack of transitivity in his or her preferences, that agent does seem to need to have one relation like this or another.
Then 0.002 hedons and 0.00001 dolors is 20 times better than 10 hedons and 1 dolor. This would be surprising.
That’s linear, with a scaling factor. If it is linear, then the scaling factor doesn’t really matter much (‘new-dolors’ can be defined as ‘dolors’ times the scaling factor; then one hedon is equal to one new-dolor). But if it’s that simple, then it’s basically a single line that we’re dealing with, not a plane.
There are any number of possible alternative (nonlinear) functions; perhaps the fifth power of the total number of dolors is equivalent to the fourth power of the number of hedons? Perhaps, and this I consider far more likely, the actual relationship between hedons and dolors is nowhere near that neat...
I would suspect that there are several different, competing functions at use here, many of which may be counterproductive. For example: very few actions produce ten billion hedons. Therefore, if I find a course of action that seems (in advance) to produce ten billion or more hedons, then it is more likely that I am mistaken, or have been somehow fooled by some enemy, than that my estimations are correct. Thus, I am automatically suspicious of such a course of action. I don’t dismiss it out of hand, but I am extremely cautious in proceeding towards that outcome, looking out for the hidden trap.
Yeah, I guess I more or less take this for granted. Or, rather, not that they’re incommensurable, exactly, but that the range of correspondences—how many Xs are worth a Y—is simply an artifact of what set of weighting factors was most effective, among those tested, in encouraging our ancestors to breed, which from our current perspective is just an arbitrary set of historical factors.
I think it might be due to the type of problem we are facing as living entities. We have a consistent never ending goal of “not killing ourselves” and “not mucking up our chances of reproduction”. Pain is one of the signs that we might be near doing these things. Every day we manage not to do these things is in some way a good day. This presents a baseline of utility where anything less than it is considered negative and anything more than that positive. So it just might be what this type of algorithm feels like from the inside.
This is a misrepresentation of the mathematical point of view. In particular, the word “were” is misleading: when I say things like this, I am referring to a property of the map, not a property of the territory. The mathematical keyword is equivalence relation; when I say that utility functions are only well-defined up to positive affine transformations, what I mean is that “utility function” does not mean “function on outcomes,” it means “equivalence class of functions on outcomes,” where positive affine transformations define the equivalence relation. There are other equivalent ways of describing what a utility function is that don’t require working with equivalence classes, but it doesn’t matter which one you pick in the sense that the resulting mathematical theory has the same mathematical consequences.
Thanks for correcting me! I changed that paragraph. Is it less offensive to people who know what they are talking about now?
It’s somewhat confusing to me; you’re using words like “set,” “space,” and “measure distances” that have mathematically precise meanings but in a way which appears to disagree with those mathematically precise meanings (I don’t know what you mean when you say that a utility function is a space). It might be helpful to non-mathematicians, though.
I mean set as in set theory. As in, the utility function is a set of equivalent functions. If I’m disagreeing with math usage, please correct me. (On second thought, that wording is pretty bad, so I might change it anyway. Still, are my set-intuitions wrong?)
I mean space as in a 1-dimensional space (with a non-crazy metric, if crazy metrics even exist for 1d). By “measure distance” I mean go into said space with a tape measure and see how far apart things are.
I call it a space because then when I visualize it as such, it has all the right properties (scale/shift agnosticism).
If I call it a real-valued function, I imagine the real number line, which has a labeled axis, so to speak, so it tempts me to do numerology.
You can think of a utility function as defining a measure of “signed distance” on its domain.
Utilities have some similarity to distance in physical space, in that to give coordinates to all objects you need to select some origin and system of units for your coordinate system, but the physical reality is the same regardless of your coordinate system. A member of a particular utility function’s equivalence class can then be thought of as a function that gives the coordinates of each thing in the domain (world-states, presumably) in some particular coordinate system.
For example, if I prefer to have three ice creams over zero three times as much as I prefer one ice cream over zero, then we can write that as a “utility function”
u(no ice cream) = 0; u(one ice cream) = 1; u(three ice creams) = 3
In this case we have chosen, arbitrarily, “no ice cream” as the origin of our coordinate system, and the distance between “one ice cream” and “none” as the basic unit of distance.
Is this what you mean by a 1-dimensional space?
That’s exactly what I mean.
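The coordinate analogy can be checked mechanically; this sketch uses the ice-cream numbers above and an arbitrary positive affine transform to stand in for a second coordinate system:

```python
# Two coordinate systems for the same ice-cream preferences.
# u2 is a positive affine transform of u1 (new origin and unit),
# so both are members of the same equivalence class.
u1 = {"none": 0, "one": 1, "three": 3}
u2 = {k: 40 * v + 7 for k, v in u1.items()}   # arbitrary rescale and shift

# Every choice (argmax) agrees...
assert max(u1, key=u1.get) == max(u2, key=u2.get)

# ...and so does every ratio of utility differences:
r1 = (u1["three"] - u1["none"]) / (u1["one"] - u1["none"])
r2 = (u2["three"] - u2["none"]) / (u2["one"] - u2["none"])
assert r1 == r2 == 3.0
```

The "3.0" is the coordinate-free content: three ice creams sit three basic units from the origin, no matter which member of the equivalence class you compute with.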
Four servings of ice cream would have me ill.
Got it. This is strictly speaking true, but “equivalence class of functions” would be a more precise way of putting it.
So there are some technical points I could go into here, but the short story is that most equivalence classes under positive affine transformations are 2-dimensional, not 1-dimensional, and also aren’t naturally endowed with a notion of distance.
I can see how distance would be trouble in 2d affine-equivalent spaces, but distance seems to me to be a sensible concept in a 1d space, even with positive scaling and shifting. And utility is 1d, so it’s safe to call it a “distance”, right?
Maybe you’re referring to distance-from-A-to-B not having a meaningful value without defining some unit system? Maybe we should call them “relative distances”, except that to me, “distance” already connotes relativeness.
I’m not sure what you mean by this.
This is a totally sensible point of view but disagrees with the mathematical definition. It also doesn’t apply directly to the 2-dimensional equivalence classes, as far as I can tell. For example, suppose we’re talking about utilities over two possible outcomes {heads, tails}. There are three equivalence classes here, which are u(heads) > u(tails), u(heads) = u(tails), and u(heads) < u(tails). The first and third equivalence classes are 2-dimensional. What is the distance between the two functions (u(heads) = 2, u(tails) = 1) and (u(heads) = 3, u(tails) = 2) in the first case, even in a relative sense?
Ohhhhhhh, do you mean 2d as in 2 degrees of freedom? I mean it as in spatial coordinates.
As an aside, I just realized that “displacement” is more accurate for what I’m getting at than “distance”. The thing I’m talking about can be negative.
And distance/displacement isn’t between equivalent utility functions, it’s between two outcomes in one utility function. “X is 5 tasty sandwiches better than Y” is what I’m referring to as a displacement.
And the displacement numbers will be the same for the entire equivalence class, which is why I prefer it to picking one of the equivalent functions out of a hat. If you only ever talk about measured distances, there is only one utility function in the equivalence class, because all the scales and shifts cancel out.
Was that less clear than I thought?
If there are only two points in a space, you can’t get a relative distance because there’s nothing to make the distance relative to. For that problem I would define U(heads) = 1 and U(tails) = 0, as per my dimensionless scheme.
What’s the difference?
Your use of the word “in” here disagrees with my usage of the word “utility function.” Earlier you said something like “a utility function is a space” and I defined “utility function” to mean “equivalence class of functions over outcomes,” so I thought you were referring to the equivalence class. Now it looks like you’re referring to the space of (probability distributions over) outcomes, which is a different thing. Among other things, I can talk about this space without specifying a utility function. A choice of utility function allows you to define a ternary operation on this space which I suppose could reasonably be called “relative displacement,” but it’s important to distinguish between a mathematical object and a further mathematical object you can construct from it.
Yes, it does. You seem to understand what I’m getting at.
I don’t think anyone is making mathematical errors in the actual model, we are just using different words which makes it impossible to communicate. If you dereference my words in your model, you will see errors, and likewise the other way.
Is there a resource where I could learn the correct terminology?
Yep.
My conventions for describing mathematical objects come from a somewhat broad range of experiences, and I’m not sure I could recommend a specific resource that would duplicate the effect of all of those experiences. Recommending a range of resources would entail learning much more than just a few conventions for describing mathematical objects, and you may not feel that this is a good use of your time, and I might agree. I can at least broadly indicate that some useful mathematical subjects to read up on might be real analysis and topology, although most of the content of these subjects is not directly relevant; what’s relevant is the conventions you’ll pick up for describing mathematical objects.
Sometime soon I might write a Discussion post about mathematics for rationalists which will hopefully address these and other concerns.
Upvoted for promise of Mathematics for Rationalists.
I feel confused. “a space I can measure distances in” is a strong property of a value, and it does not follow from your initial 5 axioms, and seems contrary to the 5th axiom.
In fact, your own examples given further on seem to provide a counterexample—i.e., if someone prefers being a whale to 400 actual orgasms, but prefers 1/400 of being a whale to 1 orgasm, then both “being a whale” and “orgasm” have some utility value, but they cannot be used as units to measure distance.
If you’re in a reality where a>b and 2a<2b, then you’re not allowed to use classic arithmetic simply because some of your items look like numbers, since they don’t behave like numbers.
“Hawaii” can’t be used as a unit to measure distance, nor can “the equator”, but “the distance from Hawaii to the equator” can. Similarly, “the difference between 0 orgasms and 1 orgasm” can be used as a unit to measure utilities (you could call this unit “1 orgasm”, but that would be confusing and silly if you had nonlinear utility in orgasms: 501 orgasms could be less than or more than “1 orgasm” better than 500).
Also, did you mean to have these the other way around?:
While this is a basic point, it’s one people seem to screw up around here a lot, so I’m glad someone wrote an article going over this in detail. Upvoted.
I have one nitpick: You say, “We have to take the ratio between two utility differences”, but really, because only positive affine transformations are OK, what we really have to take is the ratio between a utility difference and the absolute value of a utility difference.
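The nitpick can be sketched with an invented three-outcome utility function: a plain ratio of differences survives a negative scaling (which reverses all the preferences), while dividing by the absolute value of a difference exposes the reversal:

```python
def ratio(u, a, b, c, d):
    """Ratio of two utility differences."""
    return (u[a] - u[b]) / (u[c] - u[d])

def ratio_abs(u, a, b, c, d):
    """Utility difference divided by the ABSOLUTE VALUE of a difference."""
    return (u[a] - u[b]) / abs(u[c] - u[d])

u = {"X": 0.0, "Y": 1.0, "Z": 3.0}
u_neg = {k: -2 * v for k, v in u.items()}   # negative scaling: preferences reversed!

# The plain ratio cannot tell the two functions apart:
assert ratio(u, "Z", "X", "Y", "X") == ratio(u_neg, "Z", "X", "Y", "X") == 3.0

# The absolute-value version flips sign under negative scaling,
# so it is invariant only under POSITIVE affine transformations:
assert ratio_abs(u, "Z", "X", "Y", "X") == 3.0
assert ratio_abs(u_neg, "Z", "X", "Y", "X") == -3.0
```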
Tangentially, I’d also like to point out the article Torsors Made Easy by John Baez. OK, to be honest, I’m not sure how understandable this really is to someone who doesn’t already know a bit. But “torsor” is a useful concept to have when thinking about things like this, and there probably isn’t a better quick explanation out there.
Having read that article years ago, without any previous exposure to the concept of torsors (other than the implicit exposures Baez notes, that everyone’s had), torsors also came to mind for me when reading nyan_sandwich’s article.
Why? Positive affine transformations are OK, and they don’t affect the sign of utility differences.
Yes; the point of making this change is to exclude negative affine transformations.
Ooops, you are totally right. Your units have to be absolute value. Thank you, I’ll maybe fix that.
Your “dimensionless” example isn’t dimensionless; the dimensions are units of (satan-date − whale-fire).
You only get something like a reynolds number when the units cancel out, so you’re left with a pure ratio that tells you something real about your problem. Here you aren’t cancelling out any units, you’re just neglecting to write them down, and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology.
Great article other than that, though. I hadn’t seen this quote before: “We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.” For me that really captures the essence of it.
Hmm. You are right, and I should fix that. When we did that trick in school, we always called it “dimensionless”, but you are right that it’s distinct from the pi-theorem stuff (Reynolds number, etc.). I’ll rethink it.
Edit: Wait a minute, on closer inspection, your criticism seems to apply to radians (why the radius?) and the Reynolds number (characteristic length and velocity are rather arbitrary in some problems).
Why are some unit systems “dimensionless”, and others not? More relevantly, taboo “dimensionless”: why are radians better (as they clearly are) than degrees or grads or arcminutes? Why is it useful to pick the obvious characteristic lengths and velocities for Re, as opposed to something else?
For radians, it seems to be something to do with Euler’s identity and the mathematical foundations of sin and cos, but I don’t know how arbitrary those are, off the top of my head.
For Re, I’m pretty sure it’s exactly so that you can do numerology by comparing your Reynolds number to Reynolds numbers in other problems where you used the same characteristic length (if you used D for your L in both cases, your numerology will work; if not, not).
I think this works the same in my “dimensionless” utility tricks. If we are consistent about it, it lets us do (certain forms of) numerology without hazard.
Some ratios are dimensionless because the numerator and denominator are in the same dimension, so they cancel. For example, a P/E (price to earnings) ratio of a stock: the numerator and denominator are both in $ (or other currency).
Radians are a ratio of lengths (specifically, arc length to radius) whereas degrees are the same ratio multiplied by an arbitrary constant (180/pi). We could imagine that halfradians (the ratio of arc length to diameter) might also be a natural unit, and then we’d have to go into calculus to make a case for radians, but degrees and arcminutes are right out.
Lengths offer one degree of freedom because they lack units but not an origin (all lengths are positive, and this pinpoints a length of 0). For utilities, we have two degrees of freedom. One way to convert such a quantity to a dimensionless one is to take (U1 − U2)/(U1 − U3), a dimensionless function of three utilities.
This is more or less what you’re doing in your “dimensionless utility” section. But it’s important to remember that it’s a function of three arguments: 0.999 is the value obtained from considering Satan, paperclips, and whales simultaneously. It is only of interest when all three things are relevant to making a decision.
Incidentally, there’s a typo in your quote about Re: 103 should be 10^3.
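The three-argument point can be sketched directly (the utilities below are invented for illustration, standing in for any three outcomes of interest): the dimensionless value is a function of all three outcomes at once, and it is invariant across the whole equivalence class:

```python
def dimensionless(u, a, b, c):
    """(U_a - U_b) / (U_a - U_c): depends on THREE outcomes at once."""
    return (u[a] - u[b]) / (u[a] - u[c])

# Invented utilities for three abstract outcomes:
u = {"best": 1.0, "middle": 0.999, "worst": 0.0}
v = {k: 5 * val - 2 for k, val in u.items()}   # same preferences, new coordinates

x = dimensionless(u, "best", "middle", "worst")
y = dimensionless(v, "best", "middle", "worst")
assert abs(x - y) < 1e-12       # invariant across the equivalence class
assert abs(x - 0.001) < 1e-12   # but meaningless without naming all three outcomes
```

Dropping any one of the three arguments leaves a quantity that changes under rescaling or shifting, which is why the number only means something relative to the whole triple.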
I was actually thinking of diameter-radians when I wrote that, but I didn’t know what they were called, so somehow I didn’t make up a name. Thanks.
Ok good, that’s what I was intending to do, maybe it should be a bit clearer?
Shamelessly ripped from wikipedia; their typo. 10^3 does seem more reasonable. Thanks.
For the record, I was also making up a name when I said “halfradians”. And now that I think about it, it should probably be “twiceradians”, because two radians make one twiceradian. Oops.
“Awesome” is an emotional reaction whereas “utility” (as you point out in this post) is a technical and not entirely intuitive concept in decision theory. Maybe one ought to be derived from the other, but it’s not right to just implicitly assume that they are the same thing.
Unlike some recent posts about VNM, you don’t say what “outcome” means. If we take an outcome to be a world history, then “being turned into a whale for a day” isn’t an outcome.
I’m having trouble reconciling this with ‘You already know that you know how to compute “Awesomeness”, and it doesn’t feel like it has a mysterious essence that you need to study to discover.’
I’m pretty sure VNM, or just the concept of utility function, implies consequentialism (but not the other two).
(These comments occurred to me as I read the OP right after it was posted, but I waited a while to see if anyone else would make the same points. No one did, which makes me wonder why.)
Yes, good point. I was just listing words that people tend to throw around for that sort of problem. “awesome” is likewise not necessarily “good”. I wonder how I might make that clearer...
Thanks for pointing this out. I forgot to expand on that. I take “turned into a whale for a day” to be referring to the probability distribution over total world histories consistent with current observations and with the turned-into-a-whale-on-this-day constraint.
Maybe I should have explained what I was doing… I hope no one gets too confused.
“Awesomeness” is IMO the simplest effective pointer to morality that we currently have, but that morality is still inconsistent and dynamic. I take the “moral philosophy” problem to be working out in explicit detail what exactly is awesome and what isn’t, from our current position in morality-space, with all its meta-intuitions. I think this problem is incredibly hard to solve completely, but most people can do better than usual by just using “awesomeness”. I hope this makes that clearer?
In some degenerate sense, yes, but you can easily think up a utility function that cares about what rules you followed in coming to a decision, which is generally not considered “consequentialism”. It is, after all, part of the world history and therefore available to the utility function.
We may have reached the point where we are looking at the problem in more detail than “consequentialism” is good for. We may need a new word to distinguish mere VNM from rules-don’t-matter type stuff.
I don’t think this works for your post, because “turned into a whale for a day” implies I’m probably living in a universe with magic, and my expected utility conditional on that would be mostly determined by what I expect will happen with the magic for the rest of time, rather the particular experience of being a whale for a day. It would no longer make much sense to compare the utility of “turned into a whale for a day” with “day with an orgasm” and “day without an orgasm”.
It’s possible that I judged your previous post too harshly because I was missing the “most people” part. But what kind of people do you think can do better by using “awesomeness”? What about, for example, Brian Tomasik, who thinks his morality mostly has to do with reducing the amount of negative hedons in the universe (rather than whales and starships)?
Ooops. I suppose to patch that, we have to postulate that we at least believe that we live in a world where a wizard turning you into a whale is normal enough that you don’t totally reevaluate everything you believe about reality, but rare enough that it would be pretty awesome.
Thanks for catching that. I can’t believe I missed it.
I would put that guy in the “needs awesomeism” crowd, but maybe he would disagree, and I have no interest in pushing it.
I don’t much like his “morality as hostile memewarfare” idea either. In fact, I disagree with almost everything in that post.
Last night, someone convinced me to continue on this writing trend that the OP is a part of, and end up with a sane attack, or at least scouting mission, on moral philosophy and CEV or CEV-like strategies. I do have some ideas that haven’t been discussed around here, and a competent co-philosopher, so if I can merely stay on the rails (very hard), it should be interesting.
EDIT: And thanks a lot for your critical feedback; it’s really helpful given that so few other people come up with useful competent criticism.
What do you mean by “don’t like”? It’s epistemically wrong, or instrumentally bad to think that way? I’d like to see your reaction to that post in more detail.
It seems to me that people made a lot more competent critical comments when Eliezer was writing his sequences, which makes me think that we’ve driven out a bunch of competent critics (or they just left naturally and we haven’t done enough to recruit replacements).
The more I think about “awesomeness” as a proxy for moral reasoning, the less awesome it becomes and the more like the original painful exercise of rationality it looks.
see this
tl;dr: don’t dereference “awesome” in verbal-logical mode.
It’s too late for me. It might work to tell the average person to use “awesomeness” as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared.
You can’t tell me now to go back and revert to my original version of awesome unless you have a supply of blue pills whenever I need them.
If the power of this tool evaporates as soon as you start investigating it, that strikes me as a rather strong point of evidence against it. It was fun while it lasted, though.
You seem to be generalizing from one example. Have you attempted to find examples of people who have looked inside the box and not destroyed its value in the process?
I suspect that the utility of this approach is dependent on more than simply whether or not the person has examined the “awesome” label, and that some people will do better than others. Given the comments I see on LW, I suspect many people here have looked into it and still find value. (I will place myself into that group only tentatively; I haven’t looked into it in any particular detail, but I have looked. OTOH, that still seems like strong enough evidence to call “never ever look inside” into question.)
“Not tonight honey, I’ve determined that I have a 1/399 chance of being a whale!”
“What if I give you two orgasms?”
“Sorry, my utility function isn’t linear in orgasms!”
Doesn’t have to be. Two orgasms are almost certainly better than a 1/399 whale-day if you are indifferent between one orgasm and 1/400 of a whale-day.
In other words, that’s some pretty intense nonlinearity you’ve got there.
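A sketch of the arithmetic behind "almost certainly better" (the whale-day utility is an invented unit; only the ratios matter):

```python
# Invented unit: utility of a whale-day.
U_WHALE = 400.0

# Indifference between 1 orgasm and a 1/400 chance of a whale-day
# pins down the utility of the FIRST orgasm only:
u_first = U_WHALE / 400            # = 1.0

# The 1/399 lottery is worth only slightly more:
u_lottery = U_WHALE / 399          # ~1.0025

# So two orgasms lose only if the SECOND orgasm adds less than
# ~0.25% of what the first one did -- "pretty intense nonlinearity":
required_second = u_lottery - u_first
assert 0 < required_second < 0.003
```

The indifference datum fixes one point on the utility curve; it says nothing about the marginal utility of the second orgasm, which is exactly why the refusal above requires such extreme curvature.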
Can’t wait till I have time to write my linearity post...
Should be straightforward.
Only locally.
“Sure, but surely 2 orgasms are better than 1. Since you’re at 1/399 for turning into a whale, and a single orgasm is equal to a 1/400 chance of turning into a whale, wouldn’t two orgasms be good enough to require at least a 1/398 chance of turning into a whale?”
I’d like that, but let’s stay on topic here.
This post is excellent. Part of this is the extensive use of clear examples and the talking through of anticipated sticking points, objections, and mistakes, and its motivating, exploratory approach (not plucked out of thin vacuum).
I think you didn’t specify strong enough premises to justify this deduction; I think you didn’t rule out cases where your utility function would depend on probability and outcome in such a way that simply multiplying is invalid. I might have missed it.
Edit: D’oh! Never mind. This is the whole point of an Expected Utility theorem...
I doubt that significantly better compression is possible. Communicating the outcome and the probability, uncompressed, seems necessary, so anything stronger than what you did seems doubtful; yours looks minimal with respect to those constraints. However, you might have been referring to clarity more generally.
I would avoid the use of some of the more grim examples in this context. Putting nonconsensual, violent sex, torture, and ruination of vulnerable people through mental manipulation alongside ice cream, a day as a whale, and a sunny day would overstep my flippant-empathetic-gentle-depressing threshold, and it seems like it would be possible to come up with comparably effective examples that didn’t. Make of that what you will. (I encourage others to reply with their own assessment, particularly those who also felt (slightly) uncomfortable on this point, since I imagine their activation energy for saying so would be highest.)
Yeah, the violent rape and torture jarred unpleasantly with me as well. I liked the other examples and the post in general.
I see what you guys are getting at, but it was useful to go somewhere hellish to demonstrate certain invariances, and the quoted comment was too good to pass up. I could have used more sensitive examples, but it did go through my mind that I wanted to make it horrible for some reason… I won’t change it, but will steer away from such examples in the future.
That said, it’s interesting that people react to the thought of rape and torture, but not the universe getting paperclipped, which is many many orders of magnitude worse.
I get more angry at a turtle getting thrown against the wall than I do at genocides… I guess some things just hit you hard out of proportion to their actual value.
I guess rape and torture hit closer to home for some people… no one has ever actually experienced the universe getting paperclipped, nor is it remotely likely to happen tomorrow. Lots of very real people will be raped and tortured tomorrow, though.
Thanks for taking on board the remarks!
Ooops, you tried to feel a utility. Go directly to type theory hell; do not pass go, do not collect 200 utils.
I don’t think this example is evidence against trying to ‘feel’ a utility. You didn’t account for scope insensitivity and the qualitative difference between the two things you think you’re comparing.
You need to compare the feeling of the turtle thrown against the wall to the cumulative feeling when you think about EACH individual beheading, shooting, orphaned child, open grave, and every other atrocity of the genocide. Thinking about the vague concept “genocide” doesn’t use the same part of your brain as thinking about the turtle incident.
That was eminently readable. Thank you.
I hope you don’t mind if I ask for elaboration?
I’m fairly unlikely to read a dry, mathy post pointing out mistakes that people make when wielding utility in making decisions. Clear, humorous examples help, as does making abstract things concrete when possible—the radioactive utilities made me laugh. The post was fairly long, but the summary wrapped things up nicely.
It seems simple enough to me: when making decisions under certainty, you only need an acyclic preference ordering. The reals are ordered and acyclic, but they also have scale. You don’t need that scale under certainty, but you need it to encode probabilistic preferences under uncertainty.
Well put, but there is a way that scale-utilities partially show up in economics when you try to factor outcomes, even without uncertainty. It does all cash out to just a preference ordering on the monolithic outcome level, though.
It might be interesting to learn if anyone active in this community has actually defined their utility function, stated it publicly, and attempted to follow through.
I don’t trust the transitivity axiom of VNM utility. Thought I should mention this to make it clear that the “most of us” in your post is not a rhetorical device and there really are actual people who don’t buy into the VNM hegemony.
Thanks for pointing that out. I did try to make it clear that the essay was about “if you trust VNM, here’s what it means”.
I, for one, trust the transitivity axiom. It seems absurd to value going in circles, though that only comes up if you run into the right lotteries.
Maybe you could give an example of a preference cycle you think is valuable, so they rest of us could see where our intuitions diverge?
Out of curiosity, why don’t you trust the transitivity axiom?
Because when I introspect on my preferences it doesn’t seem to hold.
Examples?
Answering for myself, my unreflective preferences are nontransitive on problems like dust specks vs torture. I prefer N years of torture for X people to N years minus 1 second of torture for 1000X people, and any time of torture for X people over the same time of very slightly less painful torture for 1000X people, and yet I prefer a very slight momentary pain for any number of people, however large, to 50 years of torture for one person.
If I ever reverse the latter preference, it will be because I will have been convinced by theoretical/abstract considerations that non-transitive preferences are bad (and because I trust the other preferences in the cycle more), but I don’t think I will ever introspect it as a direct preference by itself.
Nicely put.
So suppose we use the dust specks vs. torture situation to construct a cycle of options A1, A2, …, An, in which you prefer A1 to A2, A2 to A3, and so on, and prefer An to A1. (For example, say that A1 is 50 years of torture for one person, and the other options spread things out over more people up until An is dust specks for lots of people.)
If you were asked to choose between any of the options A1 through An, which one do you pick? And why?
That might depend strongly on the filling-in details and on how the choice is framed. I can’t visualize all the options and compare them together, so I always end up comparing the nearby cases and then running through the loop. I suspect that, forced to make the choice, I would say An (the dust specks), but more because of it being a Schelling point than any substantial, defensible reason. And I would say it while still endorsing A(n−1) as better than An.
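The loop-running can be sketched mechanically, with an invented 5-option cycle standing in for the torture/specks ladder; greedy pairwise choice revisits an option instead of settling, which is what violating transitivity costs:

```python
# A cyclic preference relation: A1 > A2 > ... > An > A1.
n = 5
prefer = {(f"A{i}", f"A{i+1}") for i in range(1, n)} | {(f"A{n}", "A1")}

def better(x, y):
    """True if x is strictly preferred to y."""
    return (x, y) in prefer

# Repeatedly move to whichever option beats the current one:
current, seen = "A1", set()
while current not in seen:
    seen.add(current)
    current = next((x for x in (f"A{i}" for i in range(1, n + 1))
                    if better(x, current)), current)

# We came back to an option already visited: the "choice" never stabilizes.
assert current in seen
```

With a transitive ordering this walk would terminate at a maximal option; with the cycle, any answer it gives (like the Schelling-point An above) is an artifact of where you stop, not of the preferences themselves.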
Can you give an example? I am having a hard time imagining preferences contradicting that axiom (which is a failure on my part).
Typo: it’s meditation, not mediation.
What a disaster! Thank you.
Amartya Sen argues (it’s discussed in his Nobel prize lecture: http://www.nobelprize.org/nobel_prizes/economics/laureates/1998/senlecture.pdf) that social choice theory requires making some interpersonal comparisons of utility, as without some such comparisons there is no way to evaluate the utility of total outcomes. However, the interpersonal comparisons do not need to be unlimited; just having some of them can be enough. Since interpersonal comparisons certainly do raise issues, they doubtless require some restrictions similar to those you mention for the individual case, which seems to be why Sen takes it as a very good thing that restricted interpersonal comparisons may be sufficient.
I think that interpersonal “utility” is a different beast from VNM utility. VNM is fundamentally about sovereign preferences, not preferences within an aggregation.
Inside moral philosophy we have an intuition that we ought to aggregate preferences of other people, and we might think that using VNM is a good idea because it is about preferences too, but I think this is an error, because VNM isn’t about preferences in that way.
We need a new thing built from the ground up for utilitarian preference aggregation. It may turn out to have similarities to VNM, but I would be very surprised if it actually was VNM.
Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we’ve discussed ad nauseam before)? In response to an argument of Harsanyi’s that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.
If not, some useful references here.
ETA: I worry that I’ve unduly maligned Harsanyi by associating his argument too heavily with Phil’s post. Although I still think it’s wrong, Harsanyi’s argument is rather more sophisticated than Phil’s, and worth checking out if you’re at all interested in this area.
Oh wow.
No, not at all. You can’t derive mathematical results by playing word games. Even if you could, it doesn’t even make sense to take the average utility of a population. Different utility functions are not commensurable.
No. That is not at all how it works. A deterministic coin toss will end up the same in all Everett branches, but have subjective probability distributed between two possible worlds. You can’t conflate them; they are not the same.
Having your math rely on a misinterpreted physical theory is generally a bad sign...
Really? Translate the axioms into statements about people. Do they still seem reasonable?
Completeness. Doesn’t hold. Preferred by whom? The fact that we have a concept of “Pareto optimal” should raise your suspicions.
Transitivity. Assuming you can patch Completeness to deal with Pareto-optimality, this may or may not hold. Show me the math.
Continuity. Assuming we let population frequency or some such stand in for probability. I reject the assumption that strict averaging by population is valid. So much for reasonable assumptions.
Independence. Adding another subpopulation to all outcomes is not necessarily a no-op.
Other problems include the fact that population can change, while the sum of probabilities is always 1. The theorem probably relies on this.
Assuming you could construct some kind of coherent population-averaging theory from this, it would not involve utility or utility functions. It would be orthogonal to that, and would have to be able to take into account egalitarianism and population change, and varying moral importance of agents and such.
Shocking indeed.
While I’m in broad agreement with you here, I’d nitpick on a few things.
Agree that decision-theoretic or VNM utility functions are not commensurable—they’re merely mathematical representations of different individuals’ preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow’s impossibility theorem).
I’m actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it’s 4 that seems the most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.)
That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn’t have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen’s main point).
I ignore it because they are entirely different concepts. I also ignore aerodynamics in this discussion. It is really unfortunate that we use the same word for them. It is further unfortunate that even LWers can’t distinguish between an apple and an orange if you call them both “apple”.
“That for which the calculus of expectation is legitimate” is simply not related to interagent preference aggregation.
I’m hesitant to get into a terminology argument when we’re in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)
Yes, it’s annoying when people use the word ‘fruit’ to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I’d suggest that it’s not the most useful response to this problem to insist on using the word ‘fruit’ to refer exclusively to apples, and to proceed to make claims like ‘fruit can’t be orange coloured’ that are false for some types of fruit. (Even more so when people have been using the word ‘fruit’ to refer to oranges for longer than they’ve been using it to refer to apples.) Aren’t you just making it more difficult for people to get your point that apples and oranges are different?
On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you’re really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it’s easier to instead phrase your claims as being about apples and oranges directly when they’re intended to apply to only one type of fruit?
P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility.
“Fruit” is a natural category; apples and oranges share interesting characteristics that make it useful to talk about them in general.
“Utility” is not. The two concepts, “that for which expectation is legitimate”, and some quantity related to interagent preference aggregation do not share very many characteristics, and they are not even on the same conceptual abstraction layer.
The VNM-stuff is about decision theory. The preference-aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that boundary.
As for me using the word “utility” in this discussion, I think it should be unambiguous that I am speaking of VNM-stuff, because the OP is about VNM, and utilitarianism and VNM do not belong in the same discussion, so you can infer that all uses of “utility” refer to the same thing. Nevertheless, I will try to come up with a less ambiguous word to refer to the output of a “preference function”.
But surely the intuition that value ought to be aggregated linearly across “possible outcomes” is related to the intuition that value ought to be aggregated linearly across “individuals”? I think it basically comes down to independence: how much something (a lottery over possible outcomes / a set of individuals) is valued should be independent of other things (other parts of the total probabilistic mixture over outcomes / other individuals who exist).
When framed this way, the two problems in decision theory and moral philosophy can be merged together as the question of “where should one draw the boundary between things that are valued independently?” and the general notion of “utility” as “representation of preference that can be evaluated on certain objects independently of others and then aggregated linearly” does seem to have value.
This doesn’t seem to me to argue in favour of using wording that’s associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn’t belong there.
Two additional things, FWIW:
(1) There’s a lot of existing literature that distinguishes between “decision utility” and “experienced utility” (where “decision utility” corresponds to preference representation) so there is an existing terminology already out there. (Although “experienced utility” doesn’t necessarily have anything to do with preference or welfare aggregation either.)
(2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful in moral philosophy), so to the extent that your firewall intends to cut that off, I think it’s problematic. (Not sure that’s what you intend—but it’s one interpretation of your words in this comment.) Even Harsanyi’s argument, while flawed, is interesting in this regard (it’s much more sophisticated than Phil’s post, so I’d recommend checking it out if you haven’t already).
One thing you didn’t address was uncertainty about preferences. Specifically, will I die of radiation poisoning if I use VNM utility to make decisions when I’m uncertain about what my preferences even are? I.e., maximize expected utility, where the expectation is taken over my uncertainty about preferences in addition to any other uncertainty.
I thought you took a position on this and was about to comment on it but I couldn’t find what you said about it in the post! Apparently my brain deduced a conclusion on this issue from your post, then decided to blame/give credit to you.
Yeah I totally sidestepped that issue because I don’t know how to solve it. I don’t think anyone knows, actually. Preference uncertainty is an open problem, AFAIK.
Yes. You can’t compare or aggregate utilities from different utility functions. So at present, you basically have to pick one and hope for the best.
Eventually someone will have to build a new thing for preference uncertainty. It will almost surely degenerate to VNM when you know your utility function.
There are other problems that also sink naive decision theory, like acausal stuff, which is what UDT and TDT try to solve, and anthropics, which screws up probabilities. There’s a lot more work on those than on preference uncertainty, AFAIK.
This is exactly what my brain claimed you said :) Now I can make my comment.
Game theorists do this all the time—at least economists. They’ll create a game, then say something like “now let’s introduce noise into the payoffs” but the noise ends up being in the utility function. Then they go and find an equilibrium or something using expected utility.
Now, for every practical example I can think of off the top of my head, you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes, and the math goes through. Usually the situation is something like letting U($)=$ for simplicity, because risk aversion is orthogonal to what they’re interested in, so you can easily think about the uncertainty as being over $ rather than U($). This simplicity allows them to play fast and loose with VNM utility and get away with it, but I wouldn’t be surprised if someone made a model where they really do mean for the uncertainty to be over one’s own preferences and went ahead and used VNM utility. In any case, no one ever emphasized this point in any of the econ or game theory courses I’ve taken, grad or otherwise.
In case you’re still interested
Thanks!
If you can do that, it seems to work; noise in the payoffs is not preference uncertainty, just plain old uncertainty. So I guess my question is: what does it look like when you can’t do that, and what do we do instead?
You can at least simplify the problem somewhat by applying VNM utility using each of the candidate utility functions, and throwing out all solutions that do not appear in any of them. If you think you like either carrots or apples, you’re not going to go to the store and buy asparagus.
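The pruning rule described here can be sketched with toy numbers (the candidate utility functions and their values are entirely hypothetical): keep only the options that are best under at least one candidate.

```python
# Sketch: eliminate options that are optimal under no candidate utility
# function. Hypothetical toy numbers for the carrots/apples example.
candidates = {
    "likes_carrots": {"carrots": 1.0, "apples": 0.3, "asparagus": 0.1},
    "likes_apples":  {"carrots": 0.3, "apples": 1.0, "asparagus": 0.1},
}
options = {"carrots", "apples", "asparagus"}

# An option survives if it is the argmax of some candidate function.
best_under_some = {max(u, key=u.get) for u in candidates.values()}
pruned = options & best_under_some

assert pruned == {"carrots", "apples"}  # asparagus is never bought
```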
The other thorny issue is that uncertainty in the utility function makes learning about your utility function valuable. If you think you like either carrots or apples, then taking two grocery trips is the best answer—on the first trip you buy a carrot and an apple and figure out which one you like, and on the second trip you stock up.
The other thing is that I don’t think it’s possible to model uncertainty inside your utility function—you can only have uncertainty about how you evaluate certain events. If you don’t know whether or not you like carrots, that’s a fact about eating carrots and not one about how to decide whether or not to eat a carrot. I think that every uncertainty about a utility function is just a hidden uncertainty about how the being that experiences utility works.
Let me be specific about the math. Suppose you have a lottery L with a 1/3 chance of result A and a 2/3 chance of result B. Suppose furthermore that you are uncertain about whether you enjoy things as in U1 or U2, with equal probability of each. L is equivalent to a lottery with 1/6 chance of (A, U1), 1/3 chance of (B, U1), etc. Now you can make the first utility function of this exercise that takes into account all your uncertainty about preferences.
Note that U1 and U2 aren’t numbers—it’s how much you enjoy something if your preferences are as in U1.
What this lets us do is convert “there’s a chance I get turned into a whale and I’m not sure if I will like it” into “there’s a chance that I get turned into a whale and like it, and another chance that I get turned into a whale and don’t like it”.
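The construction above can be sketched numerically. The utility values below are made up, and the sketch assumes U1 and U2 have already been put on a common scale — the very step the surrounding discussion flags as needing extra moral-philosophy input.

```python
# Toy sketch of lifting preference uncertainty into the outcome space.
# Lottery L: 1/3 chance of result A, 2/3 chance of result B.
# Preference uncertainty: U1 or U2, each with probability 1/2.

# Hypothetical utilities, ASSUMING U1 and U2 are on a common scale.
U = {
    "U1": {"A": 1.0, "B": 0.2},
    "U2": {"A": 0.1, "B": 0.9},
}
lottery = {"A": 1 / 3, "B": 2 / 3}
p_pref = {"U1": 1 / 2, "U2": 1 / 2}

# Extended outcomes are (result, preferences) pairs; probabilities
# multiply: 1/6 chance of (A, U1), 1/3 chance of (B, U1), and so on.
extended = {
    (r, u): p_r * p_u
    for r, p_r in lottery.items()
    for u, p_u in p_pref.items()
}
assert abs(sum(extended.values()) - 1.0) < 1e-12

# One combined utility function over extended outcomes, one expectation.
eu = sum(prob * U[u][r] for (r, u), prob in extended.items())
```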
Oops. Radiation poisoning. Utility is about planning, not experiencing or enjoying.
I went through the math a couple days ago with another smart philosopher-type. We are pretty sure that this (adding preference uncertainty as an additional dimension of your ontology) is a fully general solution to preference uncertainty. Unfortunately, it requires a bit of moral philosophy to pin down the relative weights of the utility functions. That is, the utility functions and their respective probabilities are not enough to uniquely identify the combined utility function. Which is actually totally OK, because you can get that information from the same source where you got the partial utility functions.
I’ll go through the proof and implications/discussion in an upcoming post. Hopefully. I don’t exactly have a track record of following through on things...
Right, to get that answer you need to look inside your utility function… which you’re uncertain about. Stated differently, your utility function tells you how to deal with uncertainty about your utility function, but that’s another thing you’re uncertain about. But luckily your utility function tells you how to deal with uncertainty about uncertainty about your utility function… I think you can see where this is going.
Naively, my intuition is that simply adding uncertainty about preferences as part of your ontology isn’t enough because of this regress—you still don’t even know in principle how to choose between actions without more precise knowledge of your utility function. However, this regress sounds suspiciously like the sort of thing that once formalized precisely isn’t really a problem at all—just “take the limit” as it were.
That’s not the issue we ran into.
Your (partial) utility functions do not contain enough information to resolve uncertainty between them. As far as I can tell, utility functions can’t contain meta-preferences.
You can’t just pull a correct utility function out of thin air, though. You got the utility function from somewhere; it is the output of a moral-philosophy process. You resolve the uncertainty with the same information source from which you constructed the partial utility functions in the first place.
No need to take the limit or do any extrapolation (except that stuff like that does seem to show up inside the moral-philosophy process).
I think we’re using “utility function” differently here. I take it to mean the function containing all information about your preferences, preferences about preferences, and higher-level meta-preferences. I think you’re using the term to refer to the function containing just object-level preference information. Is that correct?
Now that I make this distinction, I’m not sure VNM utility applies to meta-preferences.
It doesn’t, AFAIK, which is why I said your utility function does not contain meta-preferences and the whole moral dynamic. “Utility function” is only a thing in VNM. Using it as a shorthand for “my whole reflective decision system” is incorrect use of the term, IMO.
I am not entirely sure that your utility function can’t contain meta-preferences, though. I could be convinced by some well-placed mathematics.
My current understanding is that you put the preference uncertainty into your ontology, extend your utility function to deal with those extra dimensions, and lift the actual moral updating to epistemological work over those extra ontology-variables. This still requires some level of preliminary moral philosophy to shoehorn your current incoherent godshatter-soup into that formal framework.
I’ll hopefully formalize this some day soon to something coherent enough to be criticized.
I look forward to it!
Nice catch on the radiation poisoning. Revised sentence:
Also
This is 100% expected, since utility functions that vary merely by a scaling factor and changing the zero point are equivalent.
I think we’re talking about the same thing when you say “adding preference uncertainty as an additional dimension of your ontology”. It’s kind of hard to tell at this level of abstraction.
Thanks nyan, this was really helpful in comprehending what you told me last time. So if I understand you correctly, utilities are both subjective and descriptive. They only identify what a particular single agent actually prefers under uncertain conditions. Is this right? If so, how do we take into account situations where one is not sure what one wants? Being turned into a whale might be as awesome as being turned into a gryphon, but since you don’t (presumably) know what either would be like, how do you calculate your expected payoff?
Can you link me to or in some way dereference “what I told you last time”?
If you have a probability distribution over possible utility values or something, I don’t know what to do with it. It’s a type error to aggregate utilities from different utility functions, so don’t do that. That’s the moral uncertainty problem, and I don’t think there’s a satisfactory solution yet. Though Bostrom or someone might have done some good work on it that I haven’t seen.
For now, it probably works to guess at how good it seems relative to other things. Sometimes breaking it down into a more detailed scenario helps, looking at it a few different ways, etc. Fundamentally though, I don’t know. Maximizing EU without a real utility function is hard. Moral philosophy is hard.
My bad, nyan.
You were explaining to me the difference between utility in Decision theory and utility in utilitarianism. I will try to find the thread later.
Thanks.
Are all those ostensibly unintentional typos an inside joke of some kind?
No, they are due solely to autocorrect, sloppy writing and haste. I will try to be more careful, apologies.
You know you can go back and fix them right?
Done.
...Am I the only who is wondering how being turned into a hale would even work and whether or not that would be awesome?
Probably not possible since it isn’t even a noun.
Hale is a noun, alright.
I’ve been very entertained by this framing of the problem—very fun to read!
I find it strange that you claim the date with Satan is clearly the best option, but almost in the same breath say that the utility of whaling in the lake of fire is only 0.1% worse. It sounds like your definition of clarity is a little bit different from mine.
On the Satan date, souls are tortured, steered toward destruction, and tossed in a lake of fire. You are indifferent to those outcomes because they would have happened anyway (we can grant this as a premise of the scenario). But I very much doubt you are indifferent to your role in those outcomes. I assume that you negatively value having participated in torture, damnation, and watching others suffer, but it’s not immediately clear if you had already done those things on the previous 78044 days.
Are you taking into account duration neglect? If so, is the pain of rape only slightly worse than burning in fire?
This probably sounds nitpicky; the point I’m trying to make is that computing utilities using the human brain has all kinds of strange artifacts that you probably can’t gloss over by saying “first calculate the utility of all outcomes as a number then compare all your numbers on relative scale”. We’re just not built to compute naked utilities without reference anchors, and there does not appear to be a single reference anchor to which all outcomes can be compared.
Your system seems straightforward when only 2 or 3 options are in play, but how do you compare even 10 options? 100? 1000? In the process you probably do uncover examples of your preferences that will cause you to realize you are not VNM-compliant, but what rule system do you replace it with? Or is VNM correct and the procedure is to resolve the conflict with your own broken utility function somehow?
TL;DR: I think axiom #1 (utility can be represented as a single real number) is false for human hardware, especially when paired with #5.
That was one of the major points. Do not play with naked utilities. For any decision, find the 0 anchor and the 1 anchor, and rank other stuff relative to them.
Yep, you are not VNM-compliant, or the whole exercise would be worthless. The philosophy involved in actually making your preferences consistent is hard, of course. I swept that part under the rug.
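A minimal sketch of the “find the 0 anchor and the 1 anchor” procedure; the raw scores below are hypothetical stand-ins for whatever your preference-elicitation process produces.

```python
# Sketch of the anchoring procedure: pick the worst option in this
# decision as the 0 anchor and the best as the 1 anchor, then place
# everything else between them. Raw scores are hypothetical.
raw = {"hell": -50.0, "normal_day": 3.0, "whale_day": 40.0}

lo, hi = min(raw.values()), max(raw.values())
normalized = {k: (v - lo) / (hi - lo) for k, v in raw.items()}

# Only comparisons and expectations of these numbers are meaningful;
# this positive affine rescaling changes no decision.
assert normalized["hell"] == 0.0 and normalized["whale_day"] == 1.0
```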
I understood your major point about the radioactivity of the single real number for each utility, but I got confused by what you intended the process to look like with your hell example. I think you need to be a little more explicit about your algorithm when you say “find the 0 anchor and the 1 anchor”. I defaulted to a generic idea of moral intuition about best and worst, then only made it as far as thinking it required naked utilities to find the anchors in the first place. Is your process something like: “compare each option against the next until you find the worst and best?”
It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic. Without it, you couldn’t find what is your best option if the only operation you’re allowed to do is compare one option against another. If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble.
What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice? Am I misunderstanding the meaning of the 0 and 1 anchor, and it’s possible to go less than 0 or greater than 1?
Yes, approximately.
I consider all the axioms of VNM to be totally reasonable. I don’t think the human decision system follows the VNM axioms. Hence the project of defining and switching to this VNM thing; it’s not what we already use, but we think it should be.
VNM is required to use VNM, but if you encounter a circular preference and decide you value running in circles more than the benefits of VNM, then you throw out VNM. You can’t throw it out from the inside, only decide whether it’s right from outside.
Expectation. VNM isn’t really useful without uncertainty. Without uncertainty, transitive preferences are enough.
If being a whale has utility 1, and getting nothing has utility 0, and getting a sandwich has utility 1/500, but the whale-deal only has a probability of 1/400 with nothing otherwise, then I don’t know until I do expectation that the 1/400 EU from the whale is better than the 1/500 EU from the sandwich.
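The comparison in this comment is mechanical enough to write out — a sketch using only the two operations VNM licenses (expectation, then comparison), with the numbers given above:

```python
# Whale-vs-sandwich comparison from the comment above.
u = {"whale": 1.0, "nothing": 0.0, "sandwich": 1 / 500}

def expected_utility(lottery):
    # lottery maps outcomes to probabilities summing to 1
    return sum(p * u[outcome] for outcome, p in lottery.items())

whale_deal = {"whale": 1 / 400, "nothing": 399 / 400}
sure_sandwich = {"sandwich": 1.0}

# 1/400 > 1/500, so the whale gamble beats the sure sandwich.
assert expected_utility(whale_deal) > expected_utility(sure_sandwich)
```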
I think I have updated slightly in the direction of requiring my utility function to conform to VNM and away from being inclined to throw it out if my preferences aren’t consistent. This is probably mostly due to smart people being asked to give an example of a circular preference and my not finding any answer compelling.
I think I see the point you’re trying to make, which is that we want to have a normalized scale of utility to apply probability to. This directly contradicts the prohibition against “looking at the sign or magnitude”. You are comparing 1/400 EU and 1/500 EU using their magnitudes, and jumping headfirst into the radiation. Am I missing something?
You are allowed to compare. Comparison is one of the defined operations. Comparison is how you decide which is best.
I’m uneasy with this “normalized”. Can you unpack what you mean here?
What I mean by “normalized” is that you’re compressing the utility values into the range between 0 and 1. I am not aware of another definition that would apply here.
Your rule says you’re allowed to compare, but your other rule says you’re not allowed to compare by magnitude. You were serious enough about this second rule to equate it with radiation death.
You can’t apply probabilities to utilities and be left with anything meaningful unless you’re allowed to compare by magnitude. This is a fatal contradiction in your thesis. Using your own example, you assign a value of 1 to whaling and 1/500 to the sandwich. If you’re not allowed to compare the two using their magnitude, then you can’t compare the utility of a 1/400 chance of the whale day with the sandwich, because you’re not allowed to think about how much better it is to be a whale.
There’s something missing here, which is that “1/400 chance of a whale day” means “1/400 chance of whale + 399/400 chance of normal day”. To calculate the value of “1/400 chance of a whale day” you need to assign a utility for both a whale day and a normal day. Then you can compare the resulting expectation of utility to the utility of a sandwich = 1/500 (by which we mean a sandwich day, I guess?), no sweat.
The absolute magnitudes of the utilities don’t make any difference. If you add N to all utility values, that just adds N to both sides of the comparison. (And you’re not allowed to compare utilities to magic numbers like 0, since that would be numerology.)
I notice we’re not understanding each other, but I don’t know why. Let’s step back a bit. What problem is “radiation poisoning for looking at magnitude of utility” supposed to be solving?
We’re not talking about adding N to both sides of a comparison. We’re talking about taking a relation where we are only allowed to know that A < B, multiplying B by some probability factor, and then trying to make some judgment about the new relationship between A and xB. The rule against looking at magnitudes prevents that. So we can’t give an answer to the question: “Is the sandwich day better than the expected value of ^{1}⁄_{400} chance of a whale day?”
If we’re allowed to compare A to xB, then we have to do that before the magnitude rule goes into effect. I don’t see how this model is supposed to account for that.
You can’t just multiply B by some probability factor. For the situation where you have p(B) = x, p(C) = 1 - x, your expected utility would be xB + (1-x)C. But xB by itself is meaningless, or equivalent to the assumption that the utility of the alternative (which has probability 1 - x) is the magic number 0. “1/400 chance of a whale day” is meaningless until you define the alternative that happens with probability 399/400.
For the purpose of calculating xB + (1-x)C you obviously need to know the actual values, and hence magnitudes, of x, B and C. Similarly you need to know the actual values in order to calculate whether A < B or not. “Radiation poisoning for looking at magnitude of utility” really means that you’re not allowed to compare utilities to magic numbers like 0 or 1. It means that the only things you’re allowed to do with utility values are a) compare them to each other, and b) obtain expected utilities by multiplying by a probability distribution.
[edited out emotional commentary/snark]
If you can’t multiply B by a probability factor, then it’s meaningless in the context of xB + (1x)C, also. xB by itself isn’t meaningless; it roughly means “the expected utility on a normalized scale between the utility of the outcome I least prefer and the outcome I most prefer”. nyan_sandwich even agrees that 0 and 1 aren’t magic numbers, they’re just rescaled utility values.
I’m 99% confident that that’s not what nyan_sandwich means by radiation poisoning in the original post, considering the fact that comparing utilities to 0 and 1 is exactly what he does in the hell example. If you’re not allowed to compare utilities by magnitude, then you can’t obtain an expected utility by multiplying by a probability distribution. Show the math if you think you can prove otherwise.
It’s getting hard to reference back to the original post because it keeps changing with no annotations to highlight the edits, but I think the only useful argument in the radiation poisoning section is: “don’t use units of sandwiches, whales, or orgasms because you’ll get confused by trying to experience them”. However, I don’t see any good argument for not even using Utils as a unit for a single person’s preferences. In fact, using units of Awesomes seems to me even worse than Utils, because it’s easier to accidentally experience an Awesome than a Util. Converting from Utils to unitless measurement may avoid some infinitesimal amount of radiation poisoning, but it’s no magic bullet for anything.
Oh, I was going to reply to this, and I forgot.
All this business with radiation poisoning is just a roundabout way of saying the only things you’re allowed to do with utilities are “compare two utilities” and “calculate expected utility over some probability distribution” (and rescale the whole utility function with a positive affine transformation, since positive affine transformations happen to be isomorphisms of the above two calculations).
Looking at utility values for any other purpose than comparison or calculating expected utilities is a bad idea, because your brain will think things like “positive number is good” and “negative number is bad” which don’t make any sense in a situation where you can arbitrarily rescale the utility function with any positive affine transformation.
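The invariance claimed here is easy to check numerically — a sketch verifying that an arbitrary positive affine rescaling changes neither lottery comparisons nor the relationship between expectation and the rescaling (utility values and distributions are arbitrary):

```python
import random

# Check that a positive affine transformation a*u + b (a > 0) preserves
# the two allowed operations: comparison and expected utility.
random.seed(0)
u = [random.uniform(-10, 10) for _ in range(5)]  # utilities of 5 outcomes
p = [0.1, 0.2, 0.3, 0.25, 0.15]                  # one probability distribution
q = [0.3, 0.1, 0.2, 0.15, 0.25]                  # another distribution

def eu(probs, utils):
    return sum(pi * ui for pi, ui in zip(probs, utils))

a, b = 2.5, -7.0                 # arbitrary positive affine rescaling
v = [a * ui + b for ui in u]

# The comparison between the two lotteries is unchanged...
assert (eu(p, u) > eu(q, u)) == (eu(p, v) > eu(q, v))
# ...because expectation commutes with affine maps (probabilities sum to 1).
assert abs(eu(p, v) - (a * eu(p, u) + b)) < 1e-9
```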
“xB + (1-x)0”, which is formally equivalent to “xB”, means “the expected utility of B with probability p and the outcome I least prefer on a normalized scale with probability (1-p)”, yes. The point I’m trying to make here, though, is that probability distributions have to add up to 1. “Probability p of outcome B” — where p < 1 — is a type error, plain and simple, since you haven’t specified the alternative that happens with probability (1-p). “Probability p of outcome B, and probability (1-p) of the outcome I least prefer” is the closest thing that is meaningful, but if you mean that you need to say it.
Unless you rescale everything so that magic numbers like 0 and 1 are actually utilities of possibilities under consideration.
But that’s like cutting corners in the lab; dangerous if you don’t know what you are doing, but useful if you do.
If you don’t conform to VNM, you don’t have a utility function.
I think you mean to refer to your decision algorithms.
No, I mean if my utility function violates transitivity or other axioms of VNM, I more want to fix it than to throw out VNM as being invalid.
then it’s not a utility function in the standard sense of the term.
I think what you mean to tell me is: “say ‘my preferences’ instead of ‘my utility function’”. I acknowledge that I was incorrectly using these interchangeably.
I do think it was clear what I meant when I called it “my” function and talked about it not conforming to VNM rules, so this response felt tautological to me.
My problem with VNM utility is that while in theory it is simple and elegant, it isn’t applicable to real life, because you can only assign utility to complex world states (a nontrivial task) and not to limited outcomes. If you have to choose between $1 and a 10% chance of $2, then this isn’t universally solvable in real life, because $2 doesn’t necessarily have twice the value of $1, so the completeness axiom doesn’t hold.
Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can’t assign any utility to actually infinite immortality, or you can’t differentiate between higher-quality and lower-quality immortality, or you can’t represent utility as a real number.
Neither of these problems is solved by replacing utility with awesomeness.
Could you explain that? Representing the quality of each day of your life with a real number from a bounded range, and adding them up with exponential discounting to get your utility, seems to meet all those criteria.
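A sketch of the construction proposed here, with a hypothetical discount factor and quality bound: the discounted sum stays below a finite bound even for arbitrarily long lives, while still ranking higher-quality lives above lower-quality ones.

```python
# Day-quality scores in a bounded range, aggregated with exponential
# discounting. Discount factor and bound are hypothetical choices.
GAMMA = 0.99   # discount factor per day
Q_MAX = 1.0    # per-day quality bounded in [0, Q_MAX]

# Geometric-series upper bound on total utility, even for infinite life.
bound = Q_MAX / (1 - GAMMA)

def life_utility(qualities):
    """Discounted sum of per-day qualities (any finite prefix of a life)."""
    return sum(q * GAMMA**t for t, q in enumerate(qualities))

# A thousand maximally good days still comes in under the bound, and a
# higher-quality life strictly beats a lower-quality one of equal length.
great = life_utility([1.0] * 1000)
good = life_utility([0.9] * 1000)
assert good < great < bound
```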
Indeed, already figured that out here.
Do you mean it’s not universally solvable in the sense that there is no “I always prefer the $1”-type solution? Of course there isn’t. That doesn’t break VNM; it just means you aren’t factoring outcomes properly.
Interesting...
That’s what I mean, and while it doesn’t “break” VNM, it means I can’t apply VNM to situations I would like to, such as torture vs dust specks. If I know the utility of 1000 people getting dust specks in their eyes, I still don’t know the utility of 1001 people getting dust specks in their eyes, except it’s probably higher. I can’t quantify the difference between 49 and 50 years of torture, which means I have no idea whether it’s less than, equal to, or greater than the difference between 50 and 51 years. Likewise, I have no idea how much I would pay to avoid one dust speck (or 1000 dust specks) because there’s no ratio of u($) to u(dust speck), and I have absolutely no concept how to compare dust specks with torture, and even if I had, it wouldn’t be scalable.
VNM is not a complete theory of moral philosophy, and isn’t intended to be. I tried to make that clear in OP by discussing how much work VNM does and does not do (with a focus on what it does not do).
All it does is prevent circular preferences and enforce sanity when dealing with uncertainty. It does not have anything at all to say about torture vs dust specks, the shape of utility curves, (in)dependence of outcome factors, or anything else about the structure of your utility function, because none of those are problems of circular preference or risk-sanity.
From wiki:
Nonetheless, people read into it all sorts of prescriptions and abilities that it does not have, and then complain when they discover that it does not actually have such powers, or don’t discover such, and make all sorts of dumb mistakes. Hence the OP.
VNM is a small statement on the periphery of a very large, very hard problem. Moral Philosophy is hard, and there are (so far) no silver bullets. Nothing can prevent you from having to actually think about what you prefer.
Yes, I am aware of that. The biggest trouble, as you have elaborately explained in your post, is that people think they can perform mathematical operations in VNM-utility-space to calculate utilities they have not explicitly defined in their system of ethics. I believe Eliezer has fallen into this trap; the sequences are full of that kind of thinking (e.g. torture vs dust specks), and while I realize it’s not supposed to be taken literally, “shut up and multiply” is symptomatic.
Another problem is that you can only use VNM when talking about complete world states. A day where you get a tasty sandwich might be better than a normal day, or it might not be, depending on the world state. If you know there’s a wizard who’ll give you immortality for $1, you’ll choose $1 over any probability < 1 of $2, and if the wizard wants $2, the opposite applies.
VNM isn’t bad, it’s just far, far, far too limited. It’s somewhat useful when probabilities are involved, but otherwise it’s literally just the concept of well-ordering your options by preferability.
Turns out this is not actually true: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc., immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|. Still, I’m pretty sure the set of all possible world states is of higher cardinality than ℝ, so...
(Also it’s a good illustration why simply assigning utility to 1 day of life and then scaling up is not a bright idea.)
You can talk about probability distributions over world-states as well. When I say “tasty sandwich day minus normal day” I mean to refer to the expected marginal utility of the sandwich, including the possibilities with wizards and stuff. This simplifies things a bit, but goes to hell as soon as you include probability updating, or actually have to find that value.
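That “expected marginal utility including the wizard possibilities” idea can be sketched numerically; every world-state, probability, and utility below is invented purely for illustration:

```python
# Hypothetical world-states: (P(world), u(normal day), u(sandwich day)).
worlds = {
    "no wizard":          (0.98, 1.0, 1.2),
    "wizard wants $1":    (0.01, 1.0, 0.9),  # sandwich cost the crucial dollar
    "wizard wants $2":    (0.01, 1.0, 1.2),
}

def expected_day_utility(day):
    """day = 1 for a normal day, 2 for a sandwich day."""
    return sum(vals[0] * vals[day] for vals in worlds.values())

# "Tasty sandwich day minus normal day" as an expected marginal utility:
marginal_sandwich = expected_day_utility(2) - expected_day_utility(1)
print(marginal_sandwich > 0)  # True here, despite the wizard scenario
```

The sandwich can be bad in one world and still good in expectation; the comparison only goes through because the whole distribution over world-states is on the table.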
Money is like utility—it is society’s main representation of utility.
Be careful, money is an economics and game theory thing that can give you evidence about people’s preferences, but I would not directly call it a representation of utility. It is likewise not directly relevant to utilitarianism.
I’d like to take a crack at discussing how money (and fungible-ish, consumable-ish resources in general) relate to utility, but that’s a big topic on its own, and I think it’s beyond the scope of this article.
Something like “Money: The Unit of Caring” by EY?
Money is a form of value. It has an equivalent to wireheading—namely hyperinflation. And we have maximisers of share value—namely companies. So: money is a kind of utility—though obviously not the utility of people.
Is it?
Are there any expected-money maximizers in the world? (Risk aversion is not allowed; expected utility is linear in utility, so if money is utility, it must have linear utility. Does it?)
Does anyone value money for its own sake? Or do they value what it can buy?
Is money a quantity associated with an entire world-history?
It seems accurate to say that it’s treated in a utility-like way within certain incentive systems, but actually calling it a form of utility seems to imply a kind of agency that all the money-optimizers I can think of don’t have. Except perhaps for automated trading systems, and even those can have whatever utility curves over money that their designers feel like setting.
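The “is utility linear in money?” question has a standard litmus test: an expected-money maximizer is exactly indifferent to a fair double-or-nothing bet, while any concave (risk-averse) utility over money refuses it. A sketch, with log utility as just one illustrative concave choice:

```python
import math

wealth = 100.0
stake = 50.0  # fair coin: win 50 or lose 50

# Linear utility (money IS utility): the fair bet's expected utility
# equals staying put, so an expected-money maximizer is indifferent.
linear_bet = 0.5 * (wealth + stake) + 0.5 * (wealth - stake)
print(linear_bet == wealth)  # True

# Concave utility (e.g. log): the same fair bet is strictly worse
# than keeping the current wealth.
log_bet = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
print(log_bet < math.log(wealth))  # True
```

Since essentially everyone refuses enough fair bets at a large enough stake, actual humans fail the expected-money-maximizer test, which is the point of the parenthetical above.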
You don’t think economic systems have “agency”? Despite being made up of large numbers of humans and optimising computer systems?
Not really, no. They have goals in the sense that aggregating their subunits’ goals gives us something of nonzero magnitude, but their ability to make plans and act intentionally usually seems very limited compared to individual humans’, never mind well-programmed software. Where we find exceptions, it’s usually because of an exceptional human at the helm, which of course implies more human-like and less money-optimizer-like behavior.
Right. So, to a first approximation, humans make reasonable moneyoptimizers. Thus the “Homo economicus” model.
I think it is perfectly reasonable to say that companies have “agency”. Companies are powerfully agent-like entities, complete with mission statements, contractual obligations and reputations. Their ability to make plans and act intentionally is often superhuman. Also, in many constituencies they are actually classified as legal persons.
So, money is a representation of utility. Representations of utilities don’t have to be “linear in utility”. I already said “obviously not the utility of people”, so whether people value money for its own sake doesn’t seem very relevant.
Perhaps a better point of comparison for money would be with utility-related signals in the brain—such as dopamine.
I don’t like having to say “representation of utility”. Representations are all we have. There is no utility apart from representations.
This is a difference to utility. Not a similarity. Wireheading gives low utility (for most plausible utility functions) but huge measures for other things that are not utility, like ‘happiness’. It is the reason it would be utterly absurd to say “The federal government can print arbitrarily large amounts of utility”.
You can approximate (or legislate) companies that way, too. It wouldn’t be quite as inaccurate as saying “we have homo economicus”, but it’d be a similar error.
The statements following “So” do not follow from the statements preceding it. The preceding statements are respectively negatively relevant and irrelevant. So “So” does not fit between them.
There is a relationship between money and utility. It is not an “is a kind of” relationship. (If Nyan takes a crack at explaining what the actual relationship is between fungible resources and utility it will most likely be worth reading.)
Thanks for the encouragement! I do plan to do that soon, but I am hardly an agent that can be described as following through on “plans”.
So: I was using “utility” there to mean “representation of utility”. In fact I did previously say that money was a “representation of utility”.
This is a case where there are only really representations. Utility is defined by its representations (in its “that which is maximised” sense). Without a representation, utility doesn’t really exist.
To be more specific about hyperinflation: that is a form of utility counterfeiting. It’s on the “wireheading” side, rather than the “pornography” side. This isn’t really an analogy, but an instance of another phenomenon in the same class. Hyperinflation produces poor outcomes for countries, just as wireheading produces poor outcomes for those that choose it. This is a similarity—not a difference. I am not sure why you don’t recognise the relationship here. Are you sure that you have thought the issue through?
Money is like utility but different.
It does not work like that for risk-averse people. Instead, you’d compare 400 guaranteed O-days to 1 W-day. If you want to introduce uncertainty, you have to include a term for the disutility of risk, which differs between people who agree on 400 O = 1 W. I am not sure how this term looks, but it’s unquestionably there. Looking forward to the post where you discuss it.
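One standard way to model that disutility-of-risk term is a concave utility curve: two agents can agree that 1 W-day is worth 400 O-days under certainty, yet price the risky version differently. A sketch with made-up units and a made-up power-law curve:

```python
def certainty_equivalent(p, high, low, risk_aversion):
    """Sure amount whose utility equals the lottery's expected utility,
    under the illustrative curve u(x) = x ** (1 - risk_aversion)."""
    a = 1 - risk_aversion
    eu = p * high ** a + (1 - p) * low ** a
    return eu ** (1 / a)

# Both agents value 1 W-day at 400 O-days when it's guaranteed, but a
# 50/50 shot at the W-day is priced very differently:
neutral = certainty_equivalent(0.5, 400, 0, risk_aversion=0.0)
averse = certainty_equivalent(0.5, 400, 0, risk_aversion=0.5)
print(neutral)  # 200.0 -- risk-neutral: just the expected value
print(averse)   # 100.0 -- risk-averse: well below the expected value
```

The gap between the expected value and the certainty equivalent (here 100 O-days) is exactly the “term for disutility of risk”, and it varies with the curvature even when the certainty-comparison 400 O = 1 W is held fixed.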