We need to look at it purely in terms of numbers if we are rationalists, or let us say “ratio-ists”. Is your argument really that numeric analysis is the wrong thing to do?
We need to look at it purely in terms of numbers only if we assume that we’re maximizing hedons (or whatever Omega will double). But why should we assume that?
Let’s go back to the beginning of this problem. Suppose, for simplicity’s sake, we choose only between playing once and playing until we die (these were the two alternatives discussed the most). In the latter case we die with very high probability, and quite soon. Now I personally prefer, in such a case, not to play at all. Why? Well, I just do: it’s fundamental to my desires that I not die in an hour, no matter the gain in happiness during that hour.
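The “die with very high probability, quite soon” point can be made concrete with a toy calculation. The per-round survival probability here is an illustrative assumption; the thread itself gives no numbers:

```python
# Toy sketch: if each round of Omega's game is survived independently with
# probability p, the chance of still being alive after n rounds is p**n.

def survival_probability(p: float, n: int) -> float:
    """Probability of surviving n independent rounds, each with survival chance p."""
    return p ** n

# Even a seemingly safe 99% per-round survival chance vanishes under repetition:
for n in (10, 100, 1000):
    print(n, survival_probability(0.99, n))
```

With p = 0.99 the survival chance is already well under half after 100 rounds, and effectively zero after 1000, which is why “playing until we die” means dying soon with near certainty.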
This is how I’d actually behave, and I assume many other people would as well. I don’t have to explain this fact by inventing a utility function that is maximized by not playing. Even if I myself don’t understand why I’d choose this, I’m very sure that I would.
Utilons and hedons are models that are supposed to help explain human behavior, but if they don’t fit it, it’s the models that are wrong. (This is related to the fact that I’m not sure anymore what utilons are exactly, as per my comment above.)
If we were designing a new system to achieve a goal, or even modifying humans towards a given goal, then it might be best to build maximizers of something. But if we’re analyzing actual human behavior, which is how the thread about Omega’s game got started, there’s no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.
| there’s no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.
In theory, any behavior can be described as a maximization of some function. The question is when this is useful and when it isn’t.
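A minimal sketch of why this is trivially true: given any record of choices, one can construct after the fact a “utility function” that assigns 1 to whatever was chosen and 0 to everything else, so the observed behavior “maximizes” it by construction. The function names and example situations below are hypothetical, purely for illustration:

```python
# Toy illustration: any observed choices can be rationalized after the fact
# by a utility function built from the choices themselves.

def rationalizing_utility(observed_choices):
    """Given a dict mapping each situation to the option actually chosen,
    return a 'utility function' that the observed behavior maximizes."""
    def utility(situation, option):
        # Assign 1 to the chosen option, 0 to everything else.
        return 1 if observed_choices[situation] == option else 0
    return utility

# Example: someone who declines Omega's game in every situation.
choices = {"play_once_offer": "decline", "play_forever_offer": "decline"}
u = rationalizing_utility(choices)
assert u("play_once_offer", "decline") > u("play_once_offer", "accept")
```

The construction is vacuous, which is exactly the point: “describable as maximization” carries no explanatory weight unless the function is specified independently of the behavior it explains.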
We’re modeling rational behavior, not human behavior.
It seems to me that we’re talking about both things in this thread. But I’m pretty sure this post is about analyzing human behavior… Why else would it give examples of human behavior as anecdotal evidence for certain models?
I understand that utilons arise from discussions of rational goal-seeking behavior. I still think that they don’t necessarily apply to human (arational) behavior.
I think we’re doing both, and for good reason: modeling rational behavior and modeling actual behavior are both useful. You are right to point out, though, that confusion about which one we are modeling is rampant here.