IAWYC, but am confused by the phrase
If you don’t prefer truth to happiness with false beliefs...
Does it make sense to talk about preferring something over happiness? I know what you mean if we take a folk definition of happiness as something like “bubbly feelings”. But I don’t think you mean folk happiness; for this statement to have impact, it has to mean Happiness, defined to include all of your values.
I think what I’m trying to ask is: Isn’t it by definition irrational (failing to maximize your happiness) to prefer truth to happiness?
My happiness is something you can measure just by observing the state of my brain. To measure the accuracy of my beliefs, you must measure my brain and the rest of the universe, and compare the two. I place value on the accuracy of my beliefs, which means I do value something beyond my happiness
Sure. But that is, by the definition of rationality that I think most of us have been using, irrational.
Exactly where have you gotten the idea that any of us have been using a definition of “rationality” that includes a requirement that utility supervene on brain states?
Here’s the quote from EY that I started this comment thread with:
If you don’t prefer truth to happiness with false beliefs...
Both the alternatives here are talking about brain states. EY’s ‘truth’ doesn’t mean ‘truth in the world’. The world is true by definition. He means having truth in your brain. He is trying to maximize the truth/falsehood ratio of the states within his own brain.
That’s a definition of “rationality” that includes a requirement that utility supervene on brain states.
No, as MBlume said, truth, and utility of truth, supervene on brain states and the things those brain states are about. Holding my belief about the color of the sky fixed, it is true if the sky is blue and false if the sky is green.
Also, truth and happiness are just the values being weighed in this particular case; nobody ever said they’re the only things rationalists might care about.
Most of us? Anyone besides Phil Goetz, vote this comment down if you think that it is by definition irrational to value something beyond your own experienced happiness.
(We really need a simple way to include small polls into blog comments!)
I’ve already said, in this very thread, that I’m talking about
Happiness, defined to include all of your values.
Now you just wrote
it is by definition irrational to value something beyond your own experienced happiness.
I realize that I introduced confusion with my unclear definitions.
My terms, to review a recent discussion:
Utility = a function mapping your state into its desirability, based on your values.
Happiness = a time-varying function mapping utility over time into your satisfaction with your utility.
Rationality = maximizing your expected Happiness
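In symbols, here is a minimal sketch of how I intend these to fit together (the notation and the history-function f are mine, just for illustration):

```latex
% Utility: a static map from states to desirability, per your values
U : \text{States} \to \mathbb{R}

% Happiness: satisfaction at time t, as a function of the utility history up to t
H(t) = f\big(\, U(\tau) : \tau \le t \,\big)

% Rationality, on this definition: choose the action that maximizes expected Happiness
a^{*} = \arg\max_{a} \, \mathbb{E}\left[ H \mid a \right]
```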
So I think you’re saying that you want to define rationality as maximizing your expected utility, not your expected happiness. It’s a significant difference, and I would like to know which people prefer (or if they have some other definition). But it doesn’t matter WRT the comment I made here. You’re still being a Nietzschean if you elevate Truth beyond your utility function.
a time-varying function mapping utility over time into your satisfaction with your utility
I can’t make any sense of this. I value happiness-the-brain-state, which means I value satisfaction with my situation in life. That is part of my utility function. The “life-states” are mere inputs; they don’t exhaust the definition of “utility”. If I can predict that a year after winning the lottery I won’t be any happier than I am now, that bears directly on the expected utility of winning.
You say you’re talking about “Happiness, defined to include all of your values”, but the original mention of preferring truth to happiness had this for context: “I have been heartbroken, miserable, unfocused, and extremely ineffective since”. This is surely talking about psychological happiness, not overall “value”. Why such confusing terminology?
I’m afraid I’m still utterly confused by your usage. It seems to me that you’re trying to draw two separate distinctions when you contrast happiness and utility. One is a distinction between brain states and other things we might choose to value; the other is a distinction between an instantaneous measure and a measure aggregated in some way over time.
Does this seem right to you, or am I completely missing the point? (If it does seem right, do you see how trying to do both of these with a single shift in terminology might not be the best way of proceeding? In particular, it manages to leave us with no words for the aggregate-of-value-over-time; or for the instantaneous-experience-of-particular-brain-states.)
I am also somewhat confused by your viewing the brain states (Happiness) as functions of utility. We can clearly value more than just states of our brain, so it seems far more natural to me to view value as a function of brain states + other stuff, rather than the other way around.
Yes. I don’t think introducing these distinctions one at a time would give you any additional useful concepts. The integral of utility over time serves as the aggregate of value over time. It only fails to do so when we talk about happiness, because happiness is more sensitive to changes in utility than to utility itself.
Happiness does give you an instantaneous measure; it just depends on the history. When I talk about maximizing happiness, I mean maximizing the integral of happiness over time. This works out to be the same as maximizing the increase in utility over time, for reasonable definitions of happiness; see my comment above in response to EY.
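To spell out why (assuming, per the above, that happiness tracks the rate of change of utility, so H(t) ≈ dU/dt; the rest is the fundamental theorem of calculus):

```latex
\int_0^T H(t)\,dt \;\approx\; \int_0^T \frac{dU}{dt}\,dt \;=\; U(T) - U(0)
```

So integrated happiness measures the net gain in utility over the period, not the area under the utility curve.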
We can clearly value more than just states of our brain
I think the distinction is:
‘maximize utility’ = non-hedonic rationalism
‘maximize happiness’ = hedonic rationalism
I understand that there’s a lot of sympathy for non-hedonic rationalism. But, in the long run, it probably relies on irrational, Nietzschean value-creation.
Hedonic rationalism is in danger of being circular once we can re-write our happiness functions. But this is probably completely isomorphic to the symbol-grounding problem, so we have to address this problem anyway.
I don’t think introducing these distinctions one at a time would give you any additional useful concepts.
Then there are a lot of economists (and psychologists) who disagree with you, and routinely use these concepts you don’t think are useful for apparently useful purposes.
I was a little careless, and you are taking my statement out of context and overgeneralizing it. These two distinctions are both needed to find the answer I am proposing. They can be used successfully in other contexts, or probably within the same context to address different questions.
Why do you think any of us have been using a definition of rationality that includes a requirement that utility supervene on brain states?
While we’re on the subject: I, and I think MBlume, meant simply happiness, not what you’re calling Happiness.
Yes and no. I meant Happiness to include your values. But I meant it to mean your brain states in response to the time-varying level of satisficing of your values.
Here are two possible definitions of rationality:
1. maximizing your expected utility, expressed as a static function mapping circumstances into a measure according to your values
2. maximizing your expected Happiness, where Happiness expresses your current brain state as a function of the history of your utility
The Happiness definition of rationality has evolutionary requirements: It should always motivate a creature to increase its utility, and so it should resemble the first derivative of utility.
With this definition, maximizing utility over time means maximizing the area under your utility curve. Maximizing Happiness over a time period means maximizing the amount by which your final utility is greater than your initial utility.
So utility rationality focuses on the effects during the interval under consideration. Making a series of decisions, each of which maximizes your utility over some time period, is not guaranteed to maximize your utility over the union of those time periods. (In fact, in real life, it’s pretty much guaranteed not to.)
Happiness rationality is a heuristic that gives you nearly the same effect as evaluating your total utility from now to infinity, even if you only ever evaluate your utility over a finite time-period.
My initial reaction is that Happiness rationality is more practical for maximizing your utility in the long-term.
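A toy numeric illustration of how the two definitions can disagree (the trajectories, numbers, and helper function are invented for this example; Happiness is modeled as dU/dt, per the assumption above):

```python
import numpy as np

def integrate(y, x):
    """Trapezoid-rule integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Two invented utility trajectories over the same ten-unit time span:
# "steady" holds utility high but flat; "climbing" starts low and rises.
t = np.linspace(0.0, 10.0, 1001)
trajectories = {
    "steady": np.full_like(t, 8.0),   # U(t) = 8 throughout
    "climbing": 2.0 + 0.8 * t,        # U(t) rises from 2 to 10
}

for name, u in trajectories.items():
    area = integrate(u, t)              # definition 1: area under the utility curve
    happiness = np.gradient(u, t)       # modeled Happiness ~ dU/dt (assumption above)
    net_gain = integrate(happiness, t)  # definition 2: integral of Happiness
    print(f"{name:8s}  area under U = {area:5.1f}   integral of H = {net_gain:4.1f}")

# Definition 1 prefers "steady" (area 80 vs. 60); definition 2 prefers
# "climbing" (net gain 8 vs. 0): the two criteria rank the same lives oppositely.
```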
Which do people prefer? Or do they have some other definition of rationality?
Everyone here except you is using 1.
Do you see how using 2 can better accomplish 1?
We think we can best maximize our utility by trying to maximize our utility. Evolution is a better reasoner than us, and designed us to { maximize our utility by trying to maximize our happiness }.
That nature is (always) a better reasoner than man isn’t a credible premise, particularly these days, when the analogous unconditional superiority of the market over central planning is no longer touted uncritically.
Do you assume individual rationality’s justification is utility maximization, even if we settle for second-tier happiness as a proxy? Programmed to try to maximize happiness, we act rationally when we succeed, making maximizing utility irrational, or at least less rational. Utility has nothing more to recommend it when happiness is what we want.
Another way of saying this is that happiness is utility if utility is to play its role in decision theory, and what we’ve been calling utilities are biased versions of the real things.
I would be more sympathetic towards your complaints about people speaking for you if you didn’t frequently speak for others. All others.
Even if you were right, such behavior would be intolerable. And you frequently aren’t. You aren’t even rhetorically accurate, letting ‘everyone’ represent an overwhelming majority.
From now on I will downvote any comment or post of yours that puts words in my mouth, whether directly or through reference to us collectively, regardless of the remainder of the content.
I would be fascinated to know how many of us I speak for when I say: why don’t you just fuck off.
Not me. Please can we not descend into this sort of thing? If you think Annoyance is trolling, then don’t feed. Vote down and move on.
I bet he drew the red card.
When I said “which do people prefer”, I meant “Which do you prefer after considering my explanation?” Most people are using 1 because they’ve never realized that the brain is using 2. I’d be more interested in hearing what you think people should use than what they do use, and why they should use it.
Indeed
I could also call Happiness rationality “hedonic rationality”. Maximizing utility leaves you with the problem of selecting the utility function. Hedonic rationality links your utility function to your evolved biological qualia.
Perhaps the most important question in philosophy is whether it makes sense to pursue non-hedonistic rationality. How do you ground your values, if not in your feelings?
I think that maybe all we are really doing, when we say we are rationally maximizing utility, is taking the integral of our happiness function and calling it our utility function. We have a built-in happiness function; we don’t have a built-in utility function. It seems too much of a coincidence to believe that we rationally came to a set of values that give us utility functions that just happen to be pretty nearly the same as we would get by deriving them from our evolved happiness functions.
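That is, under the same happiness-as-derivative assumption as above, the recovered utility function is just accumulated happiness, up to the constant U(0):

```latex
U(t) \;\approx\; U(0) + \int_0^t H(\tau)\,d\tau
```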
Then this “hedonic rationality” is a non-reflective variety, caring for what your current values are, but not for what you’ll do or have done with your future and past values?
Do you mean, do you place a value on your future values? I don’t think you can do anything but place negative value on a change in your values. What’s an example of a rationality model that does what you’re asking?
This is true in theory, but in practice, what we think are our terminal values we can later discover are instrumental values, which we abandon when they turn out not to serve our even-more-terminal values. Thus lots of people who used to think that homosexuality was inherently wrong feel differently when they discover that their stereotypes about gay people were mistaken.
I don’t think you can do anything but place negative value on a change in your values.
At the very least, this would seem to hold only in the extreme case that you were absolutely certain that your current values are both exhaustive and correct. I, for one, am not; and I’m not sure it’s reasonable for anyone to be so certain.*
I would generally like to value more things than I currently do. Provided they aren’t harming anybody, having more things I can find value, meaning, and fulfillment in seems like a good thing.
One of the things I want from my values is internal consistency. I’m pretty sure my current values are not internally consistent in ways I haven’t yet realized. I place positive value on changing to more consistent values.
* Unless values are supposed to be exhaustive and correct merely because you hold them—in which case, why should you care if they change? They’ll still be exhaustive and correct.
Jim Moor makes a similar case in “Should We Let Computers Get Under Our Skin”; I dispute it in a paper (abstract here).
The gist is, if we have self-improvement as a value, then yes, changing our values can be a positive thing even considered ahead of time.
I don’t think you can do anything but place negative value on a change in your values.
My assumption is that I would not choose to change my values unless I saw the change as an improvement. If my change in values is both voluntary and intentional, I’m certain my current self would approve, given the relevant new information.