While we’re on the subject: I, and I think MBlume, meant simply happiness, not what you’re calling Happiness.
Yes and no. I meant Happiness to include your values. But I meant it to mean your brain states in response to the time-varying level of satisficing of your values.
Here are two possible definitions of rationality:
1. maximizing your expected utility, expressed as a static function mapping circumstances into a measure according to your values
2. maximizing your expected Happiness, where Happiness expresses your current brain state as a function of the history of your utility
The Happiness definition of rationality has evolutionary requirements: It should always motivate a creature to increase its utility, and so it should resemble the first derivative of utility.
With this definition, maximizing utility over time means maximizing the area under your utility curve. Maximizing Happiness over a time period means maximizing the amount by which your final utility is greater than your initial utility.
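To spell out the calculus behind that step (writing U(t) for utility and H(t) for Happiness, notation I'm introducing here): if Happiness is the first derivative of utility, then accumulated Happiness over a window is just the net change in utility across it,

$$H(t) = \frac{dU}{dt} \;\Longrightarrow\; \int_{t_0}^{t_1} H(t)\,dt \;=\; U(t_1) - U(t_0),$$

so maximizing total Happiness over the window $[t_0, t_1]$ amounts to maximizing $U(t_1) - U(t_0)$, whereas utility rationality maximizes $\int_{t_0}^{t_1} U(t)\,dt$.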
So utility rationality focuses on the effects during the interval under consideration. Making a series of decisions, each of which maximizes your utility over some time period, is not guaranteed to maximize your utility over the union of those time periods. (In fact, in real life, it’s pretty much guaranteed not to.)
Happiness rationality is a heuristic that gives you nearly the same effect as evaluating your total utility from now to infinity, even if you only ever evaluate your utility over a finite time period.
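Here is a toy numerical sketch of these two claims. It is my own construction, not anything from the thread: the policy names, the numbers, and the assumption that the utility level you end the window at simply persists afterwards are all made up for illustration.

```python
# Two hypothetical "policies" evaluated over a 3-step window, under the
# made-up assumption that your end-of-window utility level persists afterwards.
binge_levels  = [5, 6, 2]   # high utility inside the window, low level at the end
invest_levels = [1, 3, 8]   # lower utility inside the window, high level at the end

def window_utility(levels):
    """Utility rationality: total utility inside the window (area under the curve)."""
    return sum(levels)

def happiness_score(levels):
    """Happiness rationality: final utility level minus initial utility level."""
    return levels[-1] - levels[0]

def long_run_utility(levels, extra_steps=10):
    """Utility over the window plus a future in which the final level persists."""
    return sum(levels) + levels[-1] * extra_steps

for name, levels in [("binge", binge_levels), ("invest", invest_levels)]:
    print(name, window_utility(levels), happiness_score(levels), long_run_utility(levels))

# binge:  window utility 13, happiness -3, long-run utility 33
# invest: window utility 12, happiness  7, long-run utility 92
# Maximizing utility within the window favors "binge"; maximizing the
# finite-window happiness score favors "invest", which is also what an
# infinite-horizon utility maximizer would choose.
```

The point is only that the end-minus-start score can track the long horizon where the within-window total does not; whether real happiness behaves this way is exactly what's at issue.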
My initial reaction is that Happiness rationality is more practical for maximizing your utility in the long term.
Which do people prefer? Or do they have some other definition of rationality?
Everyone here except you is using 1.
Do you see how using 2 can better accomplish 1?
We think we can best maximize our utility by trying to maximize our utility. Evolution is a better reasoner than us, and designed us to { maximize our utility by trying to maximize our happiness }.
That nature is (always) a better reasoner than man isn't a credible premise, particularly these days, when the analogous unconditional superiority of the market over central planning is no longer touted uncritically.
Do you assume individual rationality's justification is utility maximization, even if we settle for second-tier happiness as a proxy? Programmed to try to maximize happiness, we act rationally when we succeed, which makes maximizing utility irrational, or at least less rational. Utility has nothing more to recommend it when happiness is what we want.
Another way of saying this is that, if utility is to play its role in decision theory, happiness is the utility, and what we've been calling utilities are biased versions of the real things.
I would be more sympathetic towards your complaints about people speaking for you if you didn’t frequently speak for others. All others.
Even if you were right, such behavior would be intolerable. And you frequently aren’t. You aren’t even rhetorically accurate, letting ‘everyone’ represent an overwhelming majority.
From now on I will downvote any comment or post of yours that puts words in my mouth, whether directly or through reference to us collectively, regardless of the remainder of the content.
I would be fascinated to know how many of us I speak for when I say: why don’t you just fuck off.
Not me. Please can we not descend into this sort of thing? If you think Annoyance is trolling, then don’t feed. Vote down and move on.
I bet he drew the red card.
When I said “which do people prefer”, I meant “Which do you prefer after considering my explanation?” Most people are using 1 because they’ve never realized that the brain is using 2. I’d be more interested in hearing what you think people should use than what they do use, and why they should use it.
Indeed
I could also call Happiness rationality “hedonic rationality”. Maximizing utility leaves you with the problem of selecting the utility function. Hedonic rationality links your utility function to your evolved biological qualia.
Perhaps the most important question in philosophy is whether it makes sense to pursue non-hedonistic rationality. How do you ground your values, if not in your feelings?
I think that maybe all we are really doing, when we say we are rationally maximizing utility, is taking the integral of our happiness function and calling it our utility function. We have a built-in happiness function; we don't have a built-in utility function. It seems too much of a coincidence to believe that we rationally came to a set of values whose utility functions just happen to be pretty nearly the same as the ones we would get by deriving them from our evolved happiness functions.
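A minimal sketch of that "integral of happiness" move, assuming nothing but a hypothetical discrete happiness signal (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical per-step happiness readings (the built-in signal).
happiness = np.array([2.0, 1.0, -0.5, 3.0, -1.0])

# The discrete integral is a cumulative sum; it recovers the utility
# curve only up to an unknown starting constant.
utility = np.cumsum(happiness)

print(utility)  # utility levels 2.0, 3.0, 2.5, 5.5, 4.5
```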
Then this "hedonic rationality" is a non-reflective variety, caring about what your current values are, but not about what you'll do with your future values or have done with your past ones?
Do you mean, do you place a value on your future values? I don’t think you can do anything but place negative value on a change in your values. What’s an example of a rationality model that does what you’re asking?
This is true in theory, but in practice, what we think are terminal values can later turn out to be instrumental values, which we abandon when we discover they don't serve our even-more-terminal values. Thus lots of people who used to think that homosexuality was inherently wrong feel differently when they discover that their stereotypes about gay people were mistaken.
I don't think you can do anything but place negative value on a change in your values.
At the very least, this would seem to hold only in the extreme case that you were absolutely certain that your current values are both exhaustive and correct. I, for one, am not; and I'm not sure it's reasonable for anyone to be so certain.*
I would generally like to value more things than I currently do. Provided they aren't harming anybody, having more things I can find value, meaning, and fulfillment in seems like a good thing.
One of the things I want from my values is internal consistency. I’m pretty sure my current values are not internally consistent in ways I haven’t yet realized. I place positive value on changing to more consistent values.
* Unless values are supposed to be exhaustive and correct merely because you hold them—in which case, why should you care if they change? They’ll still be exhaustive and correct.
Jim Moor makes a similar case in "Should We Let Computers Get Under Our Skin"; I dispute it in a paper (abstract here).
The gist is, if we have self-improvement as a value, then yes, changing our values can be a positive thing even considered ahead of time.
I don't think you can do anything but place negative value on a change in your values.
My assumption is that I would not choose to change my values unless I saw the change as an improvement. If my change in values is both voluntary and intentional, I'm certain my current self would approve, given the relevant new information.