How Not to be Stupid: Know What You Want, What You Really Really Want

Previously: Starting Up

So, you want to be rational, huh? You want to be Less Wrong than you were before, hrmmm? First you must pass through the posting titles of a thousand groans. Muhahahahaha!

Let’s start with the idea of preference rankings. If you prefer A to B, well, given the choice between A and B, you’d choose A.

For example, if you face a choice between a random child being tortured to death vs. that same child leading a happy and healthy life, all else being equal and the choice costing you nothing, which do you choose?

This isn’t a trick question. If you’re a perfectly ordinary human, you presumably prefer the latter to the former.

Therefore you choose it. That’s what it means to prefer something: if you prefer A over B, you’d give up situation B to gain situation A. You want situation A more than you want situation B.

Now, if there’re many possibilities, you may ask… “But, what if I prefer B to A, C to B, and A to C?”

The answer, of course, is that you’re a bit confused about what you actually prefer. I mean, all that ranking would do is just keep you switching between those, looping around.

And, thinking in terms of resources: the universe, or an opponent, or whatever, could sell each of those to you in sequence for a small price each time, draining you of the resource (time, money, whatever) as you go around the vortex of confused desires.

This, of course, translates more precisely into a sequence of states Ai, Bi, Ci, with preferences of the form A0 < B1 < C2 < A3 < B4 …, where each state is the same as its original namesake except that you have a drop less of the relevant resource than you did before, i.e., indicating a willingness to pay the price. If the sequence keeps going all the way, then you’ll be drained, and that’s a rather inefficient way of going about it if you just wanted to give the relevant resource up, no? ;)
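To make that draining concrete, here’s a minimal sketch (in Python, with made-up states, prices, and starting money, purely for illustration) of an agent with looping preferences paying its way around the vortex:

```python
# A "money pump" sketch: an agent with cyclic preferences pays a small
# price for each trade it prefers, and goes broke without ever ending
# up anywhere better.

# Cyclic "preferences": holding A, the agent prefers B; holding B, it
# prefers C; holding C, it prefers A again.
prefers_next = {"A": "B", "B": "C", "C": "A"}

money = 10.0   # hypothetical starting resource
price = 1.0    # small price per trade
holding = "A"
trades = 0

while money >= price:
    # The agent "prefers" the next state, so it willingly pays to switch.
    holding = prefers_next[holding]
    money -= price
    trades += 1

print(f"After {trades} trades the agent holds {holding} with ${money:.2f} left.")
# -> After 10 trades the agent holds B with $0.00 left.
```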

Still, a strict loop, A > B, B > C, C > A, really is an indication that you just don’t know what you want. At this point I’ll simply dismiss that as “not really what I’d call preferences” as such.
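One way to cash that out mechanically (the representation and names here are just my own illustration, nothing canonical): treat each “X is strictly preferred to Y” judgment as a directed edge and check for a cycle. If a cycle exists, the judgments don’t form a coherent ranking:

```python
# Sketch: detect a strict preference loop (A > B, B > C, C > A) by
# depth-first search for a cycle in the "preferred over" graph.

def has_preference_cycle(preferred_over):
    """preferred_over maps each state to the set of states it is
    strictly preferred to. Returns True if the judgments loop."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {}

    def visit(s):
        color[s] = GRAY
        for t in preferred_over.get(s, ()):
            if color.get(t, WHITE) == GRAY:
                return True               # back edge: a strict loop
            if color.get(t, WHITE) == WHITE and visit(t):
                return True
        color[s] = BLACK
        return False

    return any(color.get(s, WHITE) == WHITE and visit(s)
               for s in preferred_over)

print(has_preference_cycle({"A": {"B"}, "B": {"C"}, "C": {"A"}}))   # True
print(has_preference_cycle({"A": {"B"}, "B": {"C"}, "C": set()}))   # False
```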

Note, however, that it’s perfectly okay to have some states of reality, histories of the entire universe, whatever, such that A, B, and C are all ranked equally in your preferences.

If, however, you say something like “I don’t prefer A less than B, nor more than B, nor equally to B”, I’m just going to give you a very stern look until you realize you’re rather confused. (Note: ranking two things equally doesn’t mean you are incapable of distinguishing them. Also, what you want may be a function of multiple variables, which may end up translating to something like “in this instance I want X, though in that other instance I would have wanted Y.” This is perfectly acceptable as long as the overall ranking properties (and other rules) are being followed. That is, as long as you’re Not Being Stupid.)

Let’s suppose there are two states A and B that, for you, fall into this zone of no relative preference. Let’s further suppose that somehow the universe ends up presenting you with a situation in which you have to choose between them.

What do you do? When it actually comes down to it, your options are “choose A, choose B, or let something else do the deciding” (a coin flip, or someone else who’s willing to choose between them, or basically something other than you).

If you can say “if pressed, I’d have to choose… A”, then in the end, you have ranked one above the other. If you choose option 3, then basically you’re saying “I know it’s going to be one or the other, but I don’t want to be the one making that choice.” Which could be interpreted as indifference, or at least as being _sufficiently_ indifferent that the (emotional or whatever) cost to you of making that choice yourself is much greater.

At that point, if you say to me “nope, I still neither prefer A to B, prefer B to A, nor am indifferent to the choice. It’s simply not meaningful for my preferences to state any relative ranking, even equal”, well, I would at that point be rather confused as to what you even meant by that statement. If in the above situation you would actually choose one of A or B, then clearly you have a relative ranking for them. If you went with the third option, and stated that you’re not indifferent between them but prefer neither to the other, well, I honestly don’t know what you would mean. It at least seems to me that such a thought is more a confusion than anything else. Or, at least, that at that point it isn’t even what I think I or most other people mean by “preferences.” So I’m just going to declare this the “hey everyone, look, here’s the weakest point I think I can find so far” spot, even though it doesn’t seem like all that weak a weak point to me.

So, for now, I’m going to move on and assume that preferences will be of the form A < B < C < D, E, F, G < H, I < J < K, where comma-separated states are ranked equally. (Assuming all states are comparable, “Don’t Be Stupid” does actually seem to imply rejection of cycles.)

For convenience, let’s introduce a way of numerically representing these rankings. The rule is simply this: if you rank two things the same, assign them the same real number. If you rank some state B higher than another state A, then assign B a higher number than A. (Why real numbers? Well, we’ve got an ordering here. Complex numbers aren’t going to help at all, so real numbers are perhaps the most generally useful way of doing this.)
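As a concrete illustration (the tier grouping and the particular numbers below are entirely my own choice, just a sketch), here is one of the many valid encodings of the ranking from a couple of paragraphs back:

```python
# One valid numerical encoding of the ranking
# A < B < C < D, E, F, G < H, I < J < K.
# Tiers run from least preferred to most preferred; states within a
# tier are ranked equally.

tiers = [["A"], ["B"], ["C"], ["D", "E", "F", "G"], ["H", "I"], ["J"], ["K"]]

# Rule 1: equally ranked states get the same real number.
# Rule 2: a higher-ranked state gets a strictly higher number.
# Using the tier index is one (of infinitely many) valid choices.
rank = {state: float(i) for i, tier in enumerate(tiers) for state in tier}

assert rank["D"] == rank["G"]   # same tier, same number
assert rank["K"] > rank["A"]    # higher rank, higher number
```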

For any particular preference ranking, there are obviously many valid ways of numerically representing it given the above rules. Further, one can always use a strictly increasing function to translate between any of those. And there will be an inverse, so you can translate back to your preferred encoding.

(A strictly increasing function is, well, exactly what it sounds like: if x > y, then f(x) > f(y). Try to visualize this. It never changes direction, never doubles back on itself. So there’s always an inverse: for every output, there’s a unique input. So later on, when I start focusing on indexings of the preferences that have specific mathematical properties, no generality is lost. One can always translate into another numerical coding for the preferences, and then back again.)
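Here’s a quick sketch of that round trip, using e^x as the strictly increasing function and ln as its inverse (any strictly increasing function would do just as well):

```python
import math

# Pushing one numerical encoding through a strictly increasing function
# yields another valid encoding of the exact same ranking; the inverse
# translates back.

rank = {"A": 0.0, "B": 1.0, "C": 2.0}                 # one encoding of A < B < C
encoded = {s: math.exp(x) for s, x in rank.items()}   # re-encode via f(x) = e^x

# Order is preserved: the new numbers rank the states exactly as before.
assert encoded["A"] < encoded["B"] < encoded["C"]

# The inverse, ln, recovers the original encoding.
recovered = {s: math.log(y) for s, y in encoded.items()}
assert all(abs(recovered[s] - rank[s]) < 1e-9 for s in rank)
```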

A few words of warning though: while this preference ranking thing is the ideal, no simple rule for generating the ranking is going to reproduce your preferences, your morality, your desires. Your preferences are complex. Best to instead figure out what you want in specific cases. When decisions conflict, query yourself, see which deeper principles “seem right”, and extrapolate from there. But any simple rule for generating your own One True Preference Ranking is simply going to be wrong. (Don’t worry about what a “utility function” is exactly yet. I’ll get to that later. For now, all you need to know is that it’s one of those numerical encodings of preferences that has certain useful mathematical properties.)

(EDIT: added an example of how lacking a single ranking over all preferences can lead to Being Stupid.)

(EDIT2 (4/29/2009): okay, so I was wrong in thinking that I’d shown “don’t be stupid” (in the sense used in this sequence) prohibits incomparable states, that is, preference functions that can, when given two states, output “invalid pair” rather than “>”, “<”, or “=”. I’ve removed that argument and replaced it with a discussion that I think gets more to the heart of the matter.)