I didn’t see OP explaining how preference model accuracy is increased by having more dimensions. Rather, I don’t think OP is even modeling the same thing that I’m modeling.
If I say that I don’t (contrary to popular belief) have a car, and you reply that I’m confused about what cars are for, then something has gone wrong with your reasoning.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
What does “exist” mean, here?
It means “has been conjectured, proven, talked about or etc”. Nothing fancy.
despite any alleged multidimensionality of experience <...>
This is a weird thing to say. Multidimensionality of experience is not being questioned. The proposition that the entirety of human mental state can be meaningfully compressed to one number is stupid to the extent that I doubt anyone has ever seriously suggested it in all of human history. The problem is that OP argues against this trivially false claim, and treats it as some problem of utility. My response is that utility fails to express the entire human experience because it is not for expressing the entire human experience, the same way that a car fails at cooking because it is not for cooking.
After all, we can only construct your utility function after we already know what you think the best outcome is!
No, we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
<earlier> only agents whose preferences satisfy the axioms, have a utility function
This is actually a flawed perspective. I guess it’s indicative of your belief that utility has no practical applications. If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
Who’s “we”? I, personally, find it to be of abstract mathematical interest, no more. (This was also more or less the view of Oskar Morgenstern himself.)
Well, I guess that explains something. We should probably expand on this, but I struggle to understand why you think this, or what you think that I think.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
“I don’t have a car” is exactly how I read the OP. Sibling comment seems to confirm this reading.
we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
This is a novel claim! How do we do this? It seems manifestly false!
If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
What does “approximate” mean, here? Let’s recall that according to the VNM theorem, if an agent’s preferences satisfy the axioms, then
there exists a real-valued function u defined by possible outcomes such that every preference of the agent is characterized by maximizing the expected value of u
In other words, for a VNM-compliant agent attempting to decide between outcomes A and B, A will be preferred to B if and only if u(A) > u(B).
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
This means that using the utility function as a guide to decision-making is guaranteed to violate the agent’s preferences in at least some case.
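A concrete instance of that claim, sketched in Python (outcome names invented): a cyclic preference A > B, B > C, C > A cannot be represented by any assignment of utility values, which a brute-force check over orderings confirms.

```python
# Hypothetical agent with a preference cycle: A > B, B > C, C > A.
from itertools import permutations

prefs = [("A", "B"), ("B", "C"), ("C", "A")]  # (preferred, dispreferred)

def representable(prefs, outcomes):
    # Brute-force: does ANY assignment of distinct utility ranks satisfy
    # every stated preference via u(x) > u(y)? With finitely many
    # outcomes, only the ordering of u matters, so ranks suffice.
    for ranks in permutations(range(len(outcomes))):
        u = dict(zip(outcomes, ranks))
        if all(u[x] > u[y] for x, y in prefs):
            return True
    return False

print(representable(prefs, ["A", "B", "C"]))  # False: no u fits the cycle
```

Dropping any one edge of the cycle makes the preferences transitive, and the same check then finds a representing function.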
Such an agent then has two choices:
a) He can ignore his own preferences, and use the utility function as a means of decision-making; or
b) He can evaluate the utility function’s output by comparing it to his own preferences, deferring to the latter when the two conflict.
Choosing (a) seems completely unmotivated. And if the agent chooses (b), well, then what’s the point of the utility function to begin with? Just do what you prefer.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case, compared to constructing, and then following, a utility function, given that we have to elicit all of an agent’s preferences in order to construct the utility function to begin with!
And what does that leave us with, in terms of uses for a utility function? It’s not like we can do interpersonal utility comparisons (such operations are completely meaningless under VNM); which means we can’t aggregate VNM-utility across persons. So what practical benefit is there? How do we “use” a utility function, even assuming a VNM-compliant agent (quite an assumption) and assuming we can elicit all the agent’s preferences and construct the thing (another big assumption)? What do we do with it?
I didn’t mean immediately. I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to Eu of other lotteries.
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
Yes, that’s what “approximate” means, especially if B is preferred to most other possible outcomes C.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast. The chances to get dutch booked are many. Your strategy can hardly be reasoned about.
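To make the Dutch-booking worry concrete, here is a minimal money-pump sketch (stakes and outcome names are invented): an agent with the cyclic preferences A > B > C > A will, following its own preferences at every step, pay a small fee per "upgrade" and cycle forever.

```python
# Money pump against cyclic preferences A > B > C > A.
# cycle maps the current holding to the outcome the agent prefers to it.
cycle = {"B": "A", "C": "B", "A": "C"}
fee = 1.0

wealth = 100.0
holding = "A"
for _ in range(30):          # 30 offered trades, each locally attractive
    holding = cycle[holding]  # agent trades "up", per its own preferences
    wealth -= fee             # ...and pays the fee each time

print(wealth, holding)  # 70.0 A -- back where it started, minus 30 fees
```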
It’s not like we can do interpersonal utility comparisons
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to Eu of other lotteries.
How? Demonstrate, please.
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
The chances to get dutch booked are many.
Again: demonstrate. I tell you I follow the “do what I prefer” strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
Your strategy can hardly be reasoned about.
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t “approximate” it, or do a “not-exactly-correct” computation, or anything like that. There’s nothing to approximate in the first place!
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with.
I think you’re confusing outcomes with lotteries. To build a utility function I need to make comparisons only over unique outcomes. E.g. if I know that A < B, then I no longer need to think about whether 0.5A + 0.5C < 0.5B + 0.5C (per the axiom of independence). You, on the other hand, need to separately evaluate every possible lottery.
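A sketch of the saving (utility values invented): once the unique outcomes are ranked, expected utility settles every lottery comparison with no further introspection.

```python
# Three outcome comparisons fix the utilities; after that, the axioms
# decide *every* lottery comparison for free via expected utility.
u = {"A": 0.0, "C": 0.5, "B": 1.0}  # illustrative values from A < C < B

def eu(lottery):
    # lottery: {outcome: probability}
    return sum(p * u[o] for o, p in lottery.items())

l1 = {"A": 0.5, "C": 0.5}
l2 = {"B": 0.5, "C": 0.5}
print(eu(l1) < eu(l2))  # True: 0.5A+0.5C < 0.5B+0.5C follows from A < B
```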
I tell you I follow the “do what I prefer” strategy. Dutch book me!
I also need you to explain to me in what ways your “do what I prefer” violates the axioms, and how that works. I’m waiting for that in our other thread. To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs.
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities. Then observe what kind of strategy you generated. Most likely you’ll find that the strategy is quite fair and reasonable to everyone. Voila, you have a decision procedure for a group of people. It’s not perfect, but it’s not terrible either. All other criticisms are pointless. The day that I find some usefulness in comparing ohms to inches, I will start comparing ohms to inches.
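For concreteness, here is one hypothetical version of Step 2, using range normalization (all names and numbers invented; this is just one of the "many ways"):

```python
# Step 2 as range normalization: rescale each person's utilities to
# [0, 1], then (Step 3) pick the option maximizing the sum.
raw = {
    "alice": {"picnic": 2.0, "movie": 8.0, "hike": 5.0},
    "bob":   {"picnic": 30.0, "movie": 10.0, "hike": 25.0},
}

def normalize(u):
    lo, hi = min(u.values()), max(u.values())
    return {k: (v - lo) / (hi - lo) for k, v in u.items()}

norm = {person: normalize(u) for person, u in raw.items()}
totals = {opt: sum(norm[p][opt] for p in norm) for opt in raw["alice"]}
best = max(totals, key=totals.get)
print(best)  # hike -- a compromise neither person's favorite, but fair-ish
```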
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
For example, I get the automatic guarantee that I can’t be dutch booked. Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
if I know that A < B then I no longer need to think if 0.5A + 0.5C < 0.5B + 0.5C (as per axiom of independence).
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
What’s this?! I thought you said “The chances to get dutch booked [if one does just does what one prefers] are many”! Are they many and commonplace, or are they few, esoteric, and possibly nonexistent? Why not at least present some hypotheticals to back up your claim? Where are these chances to get Dutch booked? If they’re many, then name three!
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities.
So, in other words:
… Step 2: Do something that is completely unmotivated, baseless, and nonsensical mathematically, and, to boot, extremely questionable (to put it very mildly) intuitively and practically even if it weren’t mathematical nonsense. …
Like I said: impossible.
It’s not perfect, but it’s not terrible either.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality, and is based wholly on a totally arbitrary normalization procedure which you made up from whole cloth and which was motivated by nothing but wanting there to be such a procedure.
Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
You said my strategy “can hardly be reasoned about”. What difficulties in reasoning about it do you see? “No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
“No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer. By the way, if I find B < C, I no longer need to check if A < C, which is another saving.
I thought you said “The chances to get dutch booked [if one does just does what one prefers] are many”!
No, it was supposed to be “The chances to get dutch booked [if one frequently exhibits preferences that violate the axioms] are many”. I have a suspicion that all of your preferences that violate the axioms happen to be ones that never influence your real choices, though I haven’t given up yet. You’re right that I should try to actually dutch book you with what I have; I’ll take some time to read your link from the other thread and maybe give it a try.
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
I can’t imagine what you could possibly mean by “works”, here. What does it mean to say that your procedure “works”? That it generates answers? So does pulling numbers out of a hat, or astrology. That “works”, too.
Your procedure generates answers to questions of interpersonal utility comparison. This, according to you, means that it “works”. But those questions don’t make the slightest bit of sense in the first place! And so the answers are just as meaningless.
If I have a black box that can give me yes/no answers to questions of the form “is X meters more than Y kilograms”, can I say that this box “works”? Absurd! Suppose I ask it whether 5 meters is more than 10 kilograms, and it says “yes”. What do I do with that information? What does it mean? Suppose I use the box’s output to try to maximize “total number”. What the heck am I maximizing?? It’s not a quantity that has any meaning or significance!
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
How is it? Why would it be? What practical problems does it present? What practical problems does it present even hypothetically (in any even remotely plausible scenario)?
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer.
Please avoid condescending language like “X is not a word you want to use”.
That aside, no, I definitely meant “irrelevant”. You said we can construct a utility function without having to rank outcomes. You’re now apparently retreating from that claim. This leaves the VNM theorem as useless in practice as I said at the start. Again, this was my contention:
(Incidentally, it’s not even correct to say that “utility is for choosing the best outcome”. After all, we can only construct your utility function after we already know what you think the best outcome is! Before we have the total ordering over outcomes, we can’t construct the utility function…)
And you have yet to make any sensible argument against this.
As for attempting to Dutch-book me, please, by all means, proceed!
Your procedure generates answers to questions of interpersonal utility comparison.
No, my procedure is a decision procedure that answers the question “what should our group do”. It’s a very sensible question. What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone. I’ll be condescending again—it’s very bad that you can’t figure out what sort of questions we’re trying to answer here.
You said we can construct a utility function without having to rank outcomes.
Let me recap what our discussion on this topic looks like from my point of view. I said that “we can construct a utility function after we have verified the axioms”. You asked how. I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explain that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well). You asked how. I then give a trivial example of a comparison that I don’t need to make if I used the axioms. Then you said that this is irrelevant.
Well, it’s not irrelevant, it’s a direct answer to your question and a trivial proof of my earlier claim. “Irrelevant” is not a reply I could have predicted, it took me completely by surprise. It is important to me to figure out what happened here. Presumably one (or both) of us struggles with the English language, or with basic logic, or just isn’t paying any attention. If we failed to communicate this badly on this topic, are we failing equally badly on all other topics? If we are, is there any point in continuing the discussion, or can it be fixed somehow?
No, my procedure is a decision procedure that answers the question “what should our group do”.
By the standards you seem to be applying, a random number generator also answers that question. Here’s a procedure: for any binary decision, flip a coin. Heads yes, tails no. Does it “work”? Sure. It “works” just as well as using your VNM utility “normalization” scheme.
What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone.
Your procedure doesn’t. It can’t (except by coincidence). This is because it contains a step which is purely arbitrary, and not causally linked with anyone’s preferences, sense of fairness, etc.
This is, of course, without getting into the weeds of just what on earth it means for decisions to “generally” seem “fair” to “everyone”. (Each of those scare-quoted words conceals a black morass of details, sets of potential—and potentially contradictory—operationalizations, nigh-unsolvable methodological questions, etc., etc.) But let’s bracket that.
The fact is, what you’ve done is come up with a procedure for generating answers to a certain class of difficult questions. (A procedure, note, that does not actually work, for at least two reasons; but even assuming its prerequisites are satisfied…) The problem is that those answers are basically arbitrary. They don’t reflect anything like the “real” answers (i.e., they’re not consistent with our pre-existing understanding of what the answers are or should be). Your method works [well, it doesn’t actually work, but if it did work, it would do so] only because it’s useless.
I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explain that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well).
If that is indeed what you meant, then your claim has been completely trivial all along, and I dearly wish you’d been clear to begin with. Fewer comparisons?! What good is that?? How many fewer? Is it still an infinite number? (Yes.)
I am disappointed that this discussion has turned out to be yet another instance of:
You keep repeating that, but it remains unconvincing. What I need is a specific example of a situation where my procedure would generate outcomes that we could all agree are bad.
flip a coin. Heads yes, tails no. Does it “work”? Sure.
Let’s use this for an example of what kind of argument I’m waiting for from you. Suppose you (and your group) run into lions every day. You have to compare your preferences for “run away” and “get eaten”. A coin flip is eventually going to select option 2. Everyone in your group ends up dead, even though every single one of them individually preferred to live. Every outside observer would agree that they don’t want to use this sort of decision procedure for their own group. Therefore I propose that the procedure “doesn’t work” or is “bad”.
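In numbers: if the group meets the lion once a day and flips a coin each time, the probability that everyone is still alive after n days is 0.5^n.

```python
# Survival probability under the coin-flip decision procedure:
# each day, heads = "run away", tails = "get eaten".
def survival_prob(days):
    return 0.5 ** days

for days in (1, 10, 30):
    print(days, survival_prob(days))  # below one-in-a-billion by day 30
```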
Fewer comparisons?! What good is that?? How much fewer? Is it still an infinite number? (yes)
Technically there is an infinite number of comparisons left, and also an infinite number of comparisons saved. I believe that in a practical setting this difference is not insignificant, but I don’t see an easy way to exhibit that. In part that’s because I suspect that you already save those comparisons in your practical reasoning, despite denying the axioms which permit it.
your claim has been completely trivial all along
Yes, it has, so your resistance to it did seem pretty weird to me. I personally believe that my other claims are quite trivial as well, but it’s really hard to tell misunderstandings from true disagreement. What I want to do, is figure out whether this particular misunderstanding came from my failure at writing or from your failure at reading.
For starters, after reading my first post, did you think that I think that the utility function poofed into existence with U(“eat pancakes”)=3.91 already set by itself, after performing zero comparisons? This isn’t a charitable interpretation, but I can understand it. How did you interpret my two attempts to clarify my point in the further comments?
I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.
In school I learned about utility in the context of constructing decision problems. You rank the possible outcomes of a scenario in a preference ordering. You assign utilities to the possible outcomes using an explicitly mushy, introspective process—unless money is involved, in which case the “mushy” step comes in when you calibrate your nonlinear value-of-money function. You estimate probabilities where appropriate. You chug through the calculations of the decision tree and conclude that the best choice is the one that probabilistically yields the best outcome, as proxied by the greatest probability-weighted utility.
That’s all good. Assuming you can actually do all of the above steps, I see no problem at all with using utility in that way. Very useful for deciding whether to drill a particular oil well or invest in a more expensive kind of ball bearing for your engine design.
But if you’ve ever actually tried to do that for, say, an important life decision, I would bet money that you ran up against problems. (My very first post on lesswrong.com concerned a software tool that I built to do exactly this, so I’ve been struggling with these issues for many years.) If you’re having trouble making a choice, it’s very likely that your certainty about your preferences is poor. Perhaps you’re able to construct the decision tree, and find that the computed “best choice” is actually highly sensitive to small changes in the utility values of the outcomes, in which case the whole exercise was pointless, aside from the fact that it explicated why this was a hard decision. But on some level you already knew that; after all, that’s why you were building a decision tree in the first place.
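A minimal sketch of that sensitivity problem (all probabilities and utilities here are invented): a small nudge to one mushy utility estimate flips the computed "best choice".

```python
# Two-option decision tree with a sensitivity check on one utility value.
def expected_utility(branches):
    # branches: list of (probability, utility) pairs for one choice
    return sum(p * u for p, u in branches)

take_job = [(0.6, 7.0), (0.4, 2.0)]  # (p(outcome), u(outcome))
stay_put = [(1.0, 4.8)]              # a single certain outcome

base = expected_utility(take_job) - expected_utility(stay_put)

# Sensitivity: nudge the "bad outcome" utility from 2.0 down to 1.0.
nudged = [(0.6, 7.0), (0.4, 1.0)]
flipped = expected_utility(nudged) - expected_utility(stay_put)

print(base > 0, flipped > 0)  # True False -- the "best choice" is fragile
```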
---
Another property of 3D space is that there is, in fact, a natural and useful definition of a norm, the 3D vector magnitude, which gives us the intuitive quantity “total distance”. I daresay physics would look very different if this weren’t the case.
“Total distance” (or vector magnitude or whatever) is both real and useful. “Real” in the sense that physics stops making sense without it. “Useful” in the sense that engineering becomes impossible without it.
My contention is that “utility” is not real and is only narrowly useful.
It’s not real because, again, there’s no neurological correlate for utility, and there’s no introspective sense of utility; utility is a purely abstract mathematical quantity.
It’s only narrowly useful because, at best, it helps you make the “best choice” in decision problems in a sort of rigorously systematic way, such that you can show your work to a third party and have them agree that that was indeed the best choice by some pseudo-objective metric.
All of the above is uncontroversial, as far as I can tell, which makes it all the weirder when rationalists talk about “giving utility”, “standing on top of a pile of utility”, “trading utilons”, and “human utility functions”. None of those phrases make any sense, unless the speaker is using “utility” in some kind of folk terminology sense, and departing completely from the actual definition of the concept.
At the risk of repeating myself, this community takes certain problems very seriously, problems which are only actually problems if utility is the right abstraction for systematizing human wellbeing. I don’t see that it is, unless you find yourself in a situation where you can converge on a clear preference ordering with relatively good certainty.
Perhaps you’re able to construct the decision tree, and find that the computed “best choice” is actually highly sensitive to small changes in the utility values of the outcomes, in which case the whole exercise was pointless, aside from the fact that it explicated why this was a hard decision
Are you sure that optimizing oil wells and ball bearings causes no such problems? These sound like generic problems you’d find with any sufficiently complex system, not something unique to human condition and experience.
I could argue that the abstract concept of utility is both quite real/natural and a useful abstraction, but there is nothing too disagreeable in your above comment. What bothers me is, I don’t see how adding more dimensions to utility solves any of the problems you just talked about.
If these are indeed problems that crop up with any sufficiently complex system, that’s even worse news for the idea that we can/should be using utility as the Ur-abstraction for quantifying value.
Perhaps adding more dimensions doesn’t solve anything. Perhaps all I’ve accomplished is suggesting a specific, semi-novel critique of utilitarianism. I remain unconvinced that I should push past my intuitive reservations and just swallow the Torture pill or the Repugnant Conclusion pill because the numbers say so.
That being said, every formulation of Utilitarianism that I can find depends on some sense of the “most good” and utility is a mathematical formalization of that idea. My quibble is less with the idea of doing the “most good” and more with the idea that the “most good” precisely corresponds to VNM utility.
Ur- is a prefix which strictly means “original”, but which I was using here with more of a connotation of “fundamental”. Also, I probably shouldn’t have capitalized it.
My point is that you can accept that “most good” does in fact correspond to VNM utility but reject that we want to add up this “most good” for all people and maximize the sum.
Hm. Yeah, you can accept that. You can choose to. I’m not arguing that you can’t — if you accept the axioms, then you must accept the conclusions of the axioms. I just don’t see why you would feel compelled to accept the axioms.
I feel a very strong urge to accept transitivity, others I care somewhat less about, but they seem reasonable too.
then you must accept the conclusions of the axioms
Which conclusions? To reiterate, my point is that “the Torture pill or the Repugnant Conclusion” don’t follow immediately from the existence of individual utility. They also require a demand to increase the total sum of utilities for a category of agents, which does sound vaguely good, but isn’t the only option.
I didn’t see OP explaining how preference model accuracy is increased by having more dimensions. Rather, I don’t think OP is even modeling the same thing that I’m modeling.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
It means “has been conjectured, proven, talked about or etc”. Nothing fancy.
This is a weird thing to say. Multidimensionality of experience is not being questioned. The proposition that the entirety of human mental state can be meaningfully compressed to one number is stupid to the extent that I doubt any one has ever seriously suggested it in the entire human history. The problem is that OP argues against this trivially false claim, and treats it as some problem of utility. My response is that utility fails to express the entire human experience, because it is not for expressing the entire human experience. The same way that a car fails at cooking because it is not for cooking.
No, we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
This is actually a flawed perspective. I guess it’s indicative of your belief that utility has no practical applications. If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
Well, I guess that explains something. I guess we should expand on this, but I struggle to understand why you think this, or what you think that I think.
“I don’t have a car” is exactly how I read the OP. Sibling comment seems to confirm this reading.
This is a novel claim! How do we do this? It seems manifestly false!
What does “approximate” mean, here? Let’s recall that according to the VNM theorem, if an agent’s preferences satisfy the axioms, then
In other words, for a VNM-compliant agent attempting to decide between outcomes A and B, A will be preferred to B if and only if u(A) > u(B).
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
This means that using the utility function as a guide to decision-making is guaranteed to violate the agent’s preferences in at least some case.
Such an agent then has two choices:
a) He can ignore his own preferences, and use the utility function as a means of decision-making; or
b) He can evaluate the utility function’s output by comparing it to his own preferences, deferring to the latter when the two conflict.
Choosing (a) seems completely unmotivated. And if the agent chooses (b), well, then what’s the point of the utility function to begin with? Just do what you prefer.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case, compared to constructing, and then following, a utility function, given that we have to elicit all of an agent’s preferences in order to construct the utility function to begin with!
And what does that leave us with, in terms of uses for a utility function? It’s not like we can do interpersonal utility comparisons (such operations are completely meaningless under VNM); which means we can’t aggregate VNM-utility across persons. So what practical benefit is there? How do we “use” a utility function, even assuming a VNM-compliant agent (quite an assumption) and assuming we can elicit all the agent’s preferences and construct the thing (another big assumption)? What do we do with it?
I didn’t mean immediately. I meant that assuming the axioms allows you to compute, or at least bound, the expected utility of some lotteries relative to the expected utility of other lotteries.
Yes, that’s what “approximate” means, especially if B is preferred to most other possible outcomes C.
“Just do what you prefer” is awful. Preference checking is in many cases neither cheap nor easy. The space of possible outcomes is vast. The chances to get Dutch booked are many. And your strategy can hardly be reasoned about.
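To make the Dutch-book worry concrete, here is a minimal money-pump sketch. The cyclic preference A &gt; B &gt; C &gt; A is made up for illustration (it is the standard textbook case, not anything claimed about a specific person in this thread): an agent that always trades up to whatever it prefers, for a small fee, cycles forever and only loses money.

```python
# Money-pump sketch: an agent with cyclic preferences A > B > C > A
# pays a small fee for each "upgrade" and ends up holding its original
# item, strictly poorer. All names and numbers are hypothetical.

def prefers(x, y):
    # Intransitive preferences: A > B, B > C, C > A.
    cycle = {("A", "B"), ("B", "C"), ("C", "A")}
    return (x, y) in cycle

def money_pump(start, fee=1.0, rounds=3):
    holding, wealth = start, 0.0
    trades = []
    for _ in range(rounds):
        for candidate in ("A", "B", "C"):
            if prefers(candidate, holding):
                # The agent accepts a trade up to its preferred item, for a fee.
                wealth -= fee
                trades.append((holding, candidate))
                holding = candidate
                break
    return holding, wealth, trades

item, wealth, trades = money_pump("A")
# After three trades (A -> C -> B -> A) the agent holds "A" again,
# having paid the fee three times.
```

The point of the sketch is only that a violation of transitivity is exploitable by a sequence of individually acceptable trades; whether any real person's choices expose such a cycle is exactly what the rest of this thread disputes.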
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
How? Demonstrate, please.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
Again: demonstrate. I tell you I follow the “do what I prefer” strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t “approximate” it, or do a “not-exactly-correct” computation, or anything like that. There’s nothing to approximate in the first place!
I think you’re confusing outcomes with lotteries. To build a utility function I only need to make comparisons between unique outcomes. E.g. if I know that A < B, then I no longer need to think about whether 0.5A + 0.5C < 0.5B + 0.5C (per the axiom of independence). You, on the other hand, need to evaluate every possible lottery separately.
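The saving described here can be made concrete with a small sketch (the utility numbers are hypothetical; the point is only the mechanism — once the outcome ranking is fixed, any lottery comparison reduces to arithmetic rather than a fresh introspective judgment):

```python
# Sketch of the comparison-saving the comment describes.
# Utilities are made-up; only the ordering A < B < C matters.

def expected_utility(lottery, u):
    """Expected utility of a lottery: sum of probability * outcome utility."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# One ranking of unique outcomes fixes u (up to positive affine transform).
u = {"A": 1.0, "B": 2.0, "C": 5.0}  # encodes A < B (and both < C)

lottery1 = {"A": 0.5, "C": 0.5}  # 0.5A + 0.5C
lottery2 = {"B": 0.5, "C": 0.5}  # 0.5B + 0.5C

# Independence: since u["A"] < u["B"], the mixed lotteries inherit the
# ranking automatically; no separate comparison of the lotteries is needed.
assert expected_utility(lottery1, u) < expected_utility(lottery2, u)
```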
I also need you to explain to me in what ways your “do what I prefer” strategy violates the axioms, and how that works. I’m waiting for that in our other thread. To be clear, I might be unable to Dutch book you, for example if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities. Then observe what kind of strategy you generated. Most likely you’ll find that the strategy is quite fair and reasonable to everyone. Voila, you have a decision procedure for a group of people. It’s not perfect, but it’s not terrible either. All other criticisms are pointless. The day that I find some usefulness in comparing ohms to inches, I will start comparing ohms to inches.
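The three steps above can be sketched in a few lines, assuming the simplest normalization (rescaling each person’s utilities to [0, 1], which is just one of the “many ways to do it” mentioned). All names and numbers are invented for illustration:

```python
# Minimal sketch of the three-step group decision procedure.
# The 0-1 rescaling is one arbitrary choice among many normalizations.

def normalize(u):
    """Step 2: rescale one person's utilities so worst = 0, best = 1."""
    lo, hi = min(u.values()), max(u.values())
    return {outcome: (v - lo) / (hi - lo) for outcome, v in u.items()}

def group_choice(utilities_by_person, options):
    """Step 3: pick the option maximizing the sum of normalized utilities."""
    normed = [normalize(u) for u in utilities_by_person]
    return max(options, key=lambda o: sum(u[o] for u in normed))

# Hypothetical group of two people ranking three options (step 1).
alice = {"picnic": 10, "cinema": 4, "hiking": 0}
bob = {"picnic": 1, "cinema": 3, "hiking": 2}
choice = group_choice([alice, bob], ["picnic", "cinema", "hiking"])
# Normalized sums: picnic 1.0 + 0.0, cinema 0.4 + 1.0, hiking 0.0 + 0.5,
# so the procedure picks "cinema" here: Alice's second choice, Bob's first.
```

Whether the 0-1 rescaling is defensible is precisely the point under dispute below; the sketch only shows that the procedure is mechanically executable.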
For example, I get the automatic guarantee that I can’t be Dutch booked. Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
What’s this?! I thought you said “The chances to get dutch booked [if one just does what one prefers] are many”! Are they many and commonplace, or are they few, esoteric, and possibly nonexistent? Why not at least present some hypotheticals to back up your claim? Where are these chances to get Dutch booked? If they’re many, then name three!
So, in other words:
… Step 2: Do something that is completely unmotivated, baseless, and nonsensical mathematically, and, to boot, extremely questionable (to put it very mildly) intuitively and practically even if it weren’t mathematical nonsense. …
Like I said: impossible.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality, and is based wholly on a totally arbitrary normalization procedure which you made up from whole cloth and which was motivated by nothing but wanting there to be such a procedure.
You said my strategy “can hardly be reasoned about”. What difficulties in reasoning about it do you see? “No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer. By the way, if I find B < C, I no longer need to check if A < C, which is another saving.
No, it was supposed to be “The chances to get dutch booked [if one frequently exhibits preferences that violate the axioms] are many”. I have a suspicion that all of your preferences that violate the axioms happen to be ones that never influence your real choices, though I haven’t given up yet. You’re right that I should try to actually Dutch book you with what I have; I’ll take some time to read your link from the other thread and maybe give it a try.
I can’t imagine what you could possibly mean by “works”, here. What does it mean to say that your procedure “works”? That it generates answers? So does pulling numbers out of a hat, or astrology. That “works”, too.
Your procedure generates answers to questions of interpersonal utility comparison. This, according to you, means that it “works”. But those questions don’t make the slightest bit of sense in the first place! And so the answers are just as meaningless.
If I have a black box that can give me yes/no answers to questions of the form “is X meters more than Y kilograms”, can I say that this box “works”? Absurd! Suppose I ask it whether 5 meters is more than 10 kilograms, and it says “yes”. What do I do with that information? What does it mean? Suppose I use the box’s output to try to maximize “total number”. What the heck am I maximizing?? It’s not a quantity that has any meaning or significance!
How is it? Why would it be? What practical problems does it present? What practical problems does it present even hypothetically (in any even remotely plausible scenario)?
Please avoid condescending language like “X is not a word you want to use”.
That aside, no, I definitely meant “irrelevant”. You said we can construct a utility function without having to rank outcomes. You’re now apparently retreating from that claim. This leaves the VNM theorem as useless in practice as I said at the start. Again, this was my contention:
And you have yet to make any sensible argument against this.
As for attempting to Dutch-book me, please, by all means, proceed!
No, my procedure is a decision procedure that answers the question “what should our group do”. It’s a very sensible question. What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone. I’ll be condescending again—it’s very bad that you can’t figure out what sort of questions we’re trying to answer here.
Let me recap what our discussion on this topic looks like from my point of view. I said that “we can construct a utility function after we have verified the axioms”. You asked how. I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explained that I didn’t mean zero comparisons, just fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well). You asked how. I then gave a trivial example of a comparison that I don’t need to make if I use the axioms. Then you said that this is irrelevant.
Well, it’s not irrelevant, it’s a direct answer to your question and a trivial proof of my earlier claim. “Irrelevant” is not a reply I could have predicted, it took me completely by surprise. It is important to me to figure out what happened here. Presumably one (or both) of us struggles with the English language, or with basic logic, or just isn’t paying any attention. If we failed to communicate this badly on this topic, are we failing equally badly on all other topics? If we are, is there any point in continuing the discussion, or can it be fixed somehow?
By the standards you seem to be applying, a random number generator also answers that question. Here’s a procedure: for any binary decision, flip a coin. Heads yes, tails no. Does it “work”? Sure. It “works” just as well as using your VNM utility “normalization” scheme.
Your procedure doesn’t. It can’t (except by coincidence). This is because it contains a step which is purely arbitrary, and not causally linked with anyone’s preferences, sense of fairness, etc.
This is, of course, without getting into the weeds of just what on earth it means for decisions to “generally” seem “fair” to “everyone”. (Each of those scare-quoted words conceals a black morass of details, sets of potential—and potentially contradictory—operationalizations, nigh-unsolvable methodological questions, etc., etc.) But let’s bracket that.
The fact is, what you’ve done is come up with a procedure for generating answers to a certain class of difficult questions. (A procedure, note, that does not actually work, for at least two reasons, but even assuming its prerequisites are satisfied…) The problem is that those answers are basically arbitrary. They don’t reflect anything like the “real” answers (i.e. they’re not consistent with our pre-existing understanding of what the answers are or should be). Your method works [well, it doesn’t actually work, but if it did work, it would do so] only because it’s useless.
If that is indeed what you meant, then your claim has been completely trivial all along, and I dearly wish you’d been clear to begin with. Fewer comparisons?! What good is that?? How many fewer? Is it still an infinite number? (Yes.)
I am disappointed that this discussion has turned out to be yet another instance of:
Alice: <Extraordinary, novel, truly stunning claim>!
Bob: What?! Impossible! Shocking, if true! Explain!
long discussion/argument ensues
Alice: Of course I actually meant <a version of the original claim so much weaker as to be trivial>, duh.
Bob: Damnit.
You keep repeating that, but it remains unconvincing. What I need is a specific example of a situation where my procedure would generate outcomes that we could all agree are bad.
Let’s use this for an example of what kind of argument I’m waiting for from you. Suppose you (and your group) run into lions every day. You have to compare your preferences for “run away” and “get eaten”. A coin flip is eventually going to select option 2. Everyone in your group ends up dead, even though every single one of them individually preferred to live. Every outside observer would agree that they don’t want to use this sort of decision procedure for their own group. Therefore I propose that the procedure “doesn’t work” or is “bad”.
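The lion example is easy to quantify, for what it’s worth: a fair coin flipped once a day between “run away” and “get eaten” leaves a survival probability of 0.5^n after n days, which collapses almost immediately.

```python
# Quantifying the lion example above: survival probability under the
# coin-flip decision procedure, flipping once per daily lion encounter.

def survival_probability(days):
    """Chance of choosing "run away" on every one of `days` coin flips."""
    return 0.5 ** days

# After 10 days the group's survival chance is below one in a thousand.
p10 = survival_probability(10)
```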
Technically there is an infinite number of comparisons left, and also an infinite number of comparisons saved. I believe that in a practical setting this difference is not insignificant, but I don’t see an easy way to exhibit that. In part that’s because I suspect that you already save those comparisons in your practical reasoning, despite denying the axioms which permit it.
Yes, it has, so your resistance to it did seem pretty weird to me. I personally believe that my other claims are quite trivial as well, but it’s really hard to tell misunderstandings from true disagreement. What I want to do is figure out whether this particular misunderstanding came from my failure at writing or from your failure at reading.
For starters, after reading my first post, did you think that I think that the utility function poofed into existence with U(“eat pancakes”)=3.91 already set by itself, after performing zero comparisons? This isn’t a charitable interpretation, but I can understand it. How did you interpret my two attempts to clarify my point in the further comments?
Hi zulupineapple,
I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/4vD2B3aG87EGJb7L5
Perhaps the following context will be useful.
In school I learned about utility in the context of constructing decision problems. You rank the possible outcomes of a scenario in a preference ordering. You assign utilities to the possible outcomes using an explicitly mushy, introspective process—unless money is involved, in which case the “mushy” step comes in when you calibrate your nonlinear value-of-money function. You estimate probabilities where appropriate. You chug through the calculations of the decision tree and conclude that the best choice is the one that probabilistically results in the best outcome, described by proxy as the outcome with the greatest probability-weighted utility.
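That textbook procedure can be sketched in a few lines (the oil-well scenario and all numbers are invented for illustration):

```python
# Sketch of the textbook decision procedure described above: assign
# (mushy, introspective) utilities to outcomes, estimate probabilities,
# and pick the choice with the greatest probability-weighted utility.

def expected_utility(outcomes):
    """Probability-weighted utility over a choice's possible outcomes."""
    return sum(p * u for p, u in outcomes)

choices = {
    # choice: [(probability, utility), ...] -- all numbers hypothetical
    "drill_well_A": [(0.3, 100.0), (0.7, -20.0)],  # risky, big payoff
    "drill_well_B": [(0.8, 25.0), (0.2, -5.0)],    # safer, modest payoff
}

best = max(choices, key=lambda c: expected_utility(choices[c]))
# EU(A) = 0.3*100 - 0.7*20 = 16; EU(B) = 0.8*25 - 0.2*5 = 19,
# so "drill_well_B" comes out ahead with these particular numbers.
```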
That’s all good. Assuming you can actually do all of the above steps, I see no problem at all with using utility in that way. Very useful for deciding whether to drill a particular oil well or invest in a more expensive kind of ball bearing for your engine design.
But if you’ve ever actually tried to do that for, say, an important life decision, I would bet money that you ran up against problems. (My very first post on lesswrong.com concerned a software tool that I built to do exactly this. So I’ve been struggling with these issues for many years.) If you’re having trouble making a choice, it’s very likely that your certainty about your preferences is poor. Perhaps you’re able to construct the decision tree, only to find that the computed “best choice” is highly sensitive to small changes in the utility values of the outcomes. In that case, the whole exercise was pointless, aside from explicating why this was a hard decision. But on some level you already knew that; after all, that’s why you were building a decision tree in the first place.
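The sensitivity problem can be sketched as a mechanical check: perturb each elicited utility slightly and see whether the computed “best choice” flips. All names and numbers below are hypothetical.

```python
# Sensitivity check for a decision tree: if nudging any single utility
# by +/- eps changes the answer, the exercise told you little beyond
# "this is a close call" -- which you already knew.
import itertools

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def best_choice(choices):
    return max(choices, key=lambda c: expected_utility(choices[c]))

def is_sensitive(choices, eps=0.05):
    """True if any single +/- eps utility nudge changes the best choice."""
    baseline = best_choice(choices)
    for name, outcomes in choices.items():
        for i, delta in itertools.product(range(len(outcomes)), (-eps, eps)):
            perturbed = {k: list(v) for k, v in choices.items()}
            p, u = perturbed[name][i]
            perturbed[name][i] = (p, u + delta)
            if best_choice(perturbed) != baseline:
                return True
    return False

# Two nearly tied life choices: the decision flips under a tiny nudge,
# which is exactly when the exercise stops being informative.
close_call = {
    "job_offer_1": [(1.0, 0.70)],
    "job_offer_2": [(1.0, 0.68)],
}
assert is_sensitive(close_call, eps=0.05)
```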
---
Another property of 3D space is that there is, in fact, a natural and useful definition of a norm, the 3D vector magnitude, which gives us the intuitive quantity “total distance”. I daresay physics would look very different if this weren’t the case.
“Total distance” (or vector magnitude or whatever) is both real and useful. “Real” in the sense that physics stops making sense without it. “Useful” in the sense that engineering becomes impossible without it.
My contention is that “utility” is not real and only narrowly useful.
It’s not real because, again, there’s no neurological correlate for utility, there’s no introspective sense of utility, utility is a purely abstract mathematical quantity.
It’s only narrowly useful because, at best, it helps you make the “best choice” in decision problems in a sort of rigorously systematic way, such that you can show your work to a third party and have them agree that that was indeed the best choice by some pseudo-objective metric.
All of the above is uncontroversial, as far as I can tell, which makes it all the weirder when rationalists talk about “giving utility”, “standing on top of a pile of utility”, “trading utilons”, and “human utility functions”. None of those phrases make any sense, unless the speaker is using “utility” in some kind of folk terminology sense, and departing completely from the actual definition of the concept.
At the risk of repeating myself, this community takes certain problems very seriously, problems which are only actually problems if utility is the right abstraction for systematizing human wellbeing. I don’t see that it is, unless you find yourself in a situation where you can converge on a clear preference ordering with relatively good certainty.
Are you sure that optimizing oil wells and ball bearings causes no such problems? These sound like generic problems you’d find with any sufficiently complex system, not something unique to the human condition and experience.
I could argue that the abstract concept of utility is both quite real/natural and a useful abstraction, but there is nothing too disagreeable in your above comment. What bothers me is, I don’t see how adding more dimensions to utility solves any of the problems you just talked about.
If these are indeed problems that crop up with any sufficiently complex system, that’s even worse news for the idea that we can/should be using utility as the Ur-abstraction for quantifying value.
Perhaps adding more dimensions doesn’t solve anything. Perhaps all I’ve accomplished is suggesting a specific, semi-novel critique of utilitarianism. I remain unconvinced that I should push past my intuitive reservations and just swallow the Torture pill or the Repugnant Conclusion pill because the numbers say so.
Maybe you’re confusing utility with utilitarianism? The two are not identical.
I’m going to be using utility until you propose something better. What’s “Ur”, by the way?
Not confused, just being lazy with language.
That being said, every formulation of utilitarianism that I can find depends on some sense of the “most good”, and utility is a mathematical formalization of that idea. My quibble is less with the idea of doing the “most good” and more with the idea that the “most good” precisely corresponds to VNM utility.
Ur- is a prefix which strictly means “original” but which I was using here with more of a connotation of “fundamental”. Also, I probably shouldn’t have capitalized it.
My point is that you can accept that “most good” does in fact correspond to VNM utility but reject that we want to add up this “most good” for all people and maximize the sum.
Hm. Yeah, you can accept that. You can choose to. I’m not arguing that you can’t — if you accept the axioms, then you must accept the conclusions of the axioms. I just don’t see why you would feel compelled to accept the axioms.
I feel a very strong urge to accept transitivity, others I care somewhat less about, but they seem reasonable too.
Which conclusions? To reiterate, my point is that “the Torture pill or the Repugnant Conclusion” don’t follow immediately from the existence of individual utility. They also require a demand to increase the total sum of utilities for a category of agents, which does sound vaguely good, but isn’t the only option.