Psychohistorian disagrees that cousin_it was disagreeing with him.
Very cute ;)
I’m not sure it’s a status-lowering act. I know it intuitively seems so; if you’re only ever asking, and never contributing, then yeah. But not if you contribute a lot too.
I’m just done. (I think I’m being stupid above.) Thanks.
Thanks for the link. Of course I should have checked that....
I’d like to point out that you find this in the second paragraph: “For an eudaemonist, happiness has intrinsic value”
Given the rest of what you’ve said, and my attachment to happiness as self-evidently valuable, a broader conception of “happiness” (as in eudaemonia above) may avoid adverse outcomes like wireheading (assuming wireheading is one). As other commenters here have noted, there is no single definition anyway. You might say the broader it becomes, the less useful. Sure, but any such measure would probably have to be really broad, like “utility”. When I said I don’t think ‘intrinsic worth’ is a thing, it’s because I was identifying it with utility, and I guess I wasn’t thinking of (overall) utility as a ‘thing’ because to me the concept is really vague; I just think of it as an aggregate, an aggregate of things like happiness that contribute to utility.
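To put the “aggregate” picture in rough symbols (my own illustrative notation, with made-up components and weights, not anything from the discussion above):

$$U \;=\; \sum_i w_i\, v_i \;=\; w_{\text{happiness}}\, v_{\text{happiness}} + w_{\text{health}}\, v_{\text{health}} + \dots$$

where the $v_i$ are things you value for their own sake and the $w_i$ say how much each one counts toward the total.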
I mentioned how if you’re going to call anything a terminal value, happiness seems like a good one. Now I don’t think so: you seem to be saying that you shouldn’t call (edit: aren’t justified in considering) anything a terminal value other than utility itself, which seems reasonable. Is that right?
More to the point:
So why does ‘happiness’ get a free pass from this inspection?
I’m not sure; it now seems to me it oughtn’t to. Maybe another Less Wronger can contribute more here; I can’t.
Good point. And I think I’ll have to exit too, because I have the feeling that I’m doing something wrong, and I’m frankly too tired to figure that out right now.
Just one question. “Declaring a thing to be another thing does not make it that thing. Brute fiat is insufficient to normative evaluations of intrinsic worth.” Among other things I may be confused about, I’m (still) confused what intrinsic worth might be. Since I don’t (currently) think ‘intrinsic worth’ is a thing, it seems to me that it is just the nature of a terminal value that it’s something you choose, so I don’t see the violation.
EDIT: Edited statement edited out.
Yo dawg...
I think you’re still missing the point. You can call happiness a terminal value, because you decide what those are.
I think you are confused here; what do you mean by inherently ‘good’? Why must more of X always be better than less of it? Does this resolve it?: yes, happiness =/= utility. I never claimed it was, and I don’t think anyone did. But among all the things that, aggregated together, make up your ‘utility function’, happiness (for most people) is a pretty big component. And it is aggregate utility, and only aggregate utility, that would always be better with more.
Then, I suppose happiness isn’t a terminal value. I think I was wrong. The only “terminal” value would be total utility... but happiness is so fundamental and “self-evidently valuable” to most people that it seems useful to call such a thing a “terminal value” if indeed you’re going to call anything that.
P.S. I think you think you’re saying something meaningful when you say “useful”. I don’t think you are. I think you’re just expressing the idea of an aggregate utility, the sum. If not, what do you mean?
EDIT: This threw me for a loop: “I know this to be false of happiness/pleasure; I know that there is such a thing as “too happy”.” Obviously, if happiness is a terminal value, you’re right you can’t be too happy. I think I’m either confused or tired or both. And if it so happens that in reality people don’t desire, upon reflection, to maximize happiness because there’s a point at which it’s bad, then I understand you; such a person would be inconsistent to call happiness a terminal value in such a case.
Why do you think you can have too much happiness? (Think of some situation.)
Presumably there’s some trade-off...
Now consider someone else in that same situation. Would someone else also think they have too much happiness in that situation? Because if that’s not the case, you just have different terminal values.
Ultimately, someone may judge their happiness to be most important to them. You can say, ‘no, they should care more about (whatever tradeoff you think there is, maybe decreased ambition)’; they can simply respond, ‘no, I disagree, I don’t care more about that, I care more about happiness’, because that’s what it means for a value to be “terminal”: it means for that value to be the ultimate basis of judgment.
To make it clearer: you seem to think that different people’s terminal values must be the same. Why do you think this?
Might I recommend following Zaine’s description with this paragraph from Luke’s How To Be Happy:
Ignore “We all want to be happy,” as apparently you don’t, but continue with: “and happiness is useful for other things, too.[2] For example, happiness improves physical health,[3] improves creativity,[4] and even enables you to make better decisions.[5] (It’s harder to be rational when you’re unhappy.[6])”
EDIT: Note that the formatting is off in the copy-and-paste; comments do not support superscripts. The numbers 2-6 refer to notes in the article.
(Apologies for my earlier comment. I’ve been in a negative emotional funk lately, and at least I am more self-aware because of that comment and its response. Anyway---)
I’m a little confused about why you think it’s hand-waving or question-begging to call happiness a ‘terminal value’. Here’s why.
Your utility function is just a representation of what you want to increase. So if happiness is what you want to increase, it ought to be represented (heavily) in your utility function, whatever that is. As far as I know, “utility” is a (philosophical) explication, a more precise and formalized way of talking about something. This is a concept I learned from some metalogic: logic is an “explication” that hopes to improve on intuitive reasoning based on intuitively valid inferences. Logical tautologies like P-->P aren’t themselves considered to be true; they’re only considered to represent truths, like ‘if snow is white then snow is white’. All of which is supposed to remind you that just because you dress something up in formalism doesn’t mean you don’t have to refer back to what you were trying to represent in the first place, which may be mundane in the sense of ‘everyday’.
So of course you can call happiness a terminal value. Your values aren’t constructed by your utility function; they’re described by it. In my opinion you’re taking things backward. Or, if maximal utility doesn’t include increased happiness, you’re doing it wrong, assuming happiness is what you value.
[Note: was edited.]
This is a really stupid question, but I don’t grok the distinction between ‘learning’ and ‘self-modification’ - do you get it?
Emotion dressed up as pseudo-intellectualism. How do I know that? Because the answer is so supremely obvious.
Others: happiness as terminal value.
You: (apparently) maximizing potential as terminal value.
… What exactly is so baffling? People want to be happy for its own sake. You can say “cuz it feels good” or whatever, but in the end you’re going to be faced with the fact that it’s just a terminal value and you’re going to have a hard time “explaining” it.
P.S. The specific use of the word contemptible is what tipped me off to the fact that you’re not emotionally prepared to ask a good question here.
Anything can be included in rationality after you realize it needs to be.
Or: you can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don’t occur to us (at least until later). So sure, increased accuracy [to social detriment] is not rationality, once you realize it. But you need to realize it. I think HungryTurtle is helping us realize it.
So I think the real question is: is your current model of rationality, the way you think about it right now and actually (hopefully) use it, suboptimal?
TheOtherDave is being clear. There are obviously two considerations, right?
- The comparative benefit of improving two skillsets (take into account comparative advantage!)
- The comparative cost of improving two skillsets
Conceptually easy.
Awesome, big thanks!
Tversky and Kahneman, hogwash? What? Can you explain? Or just mention something?
You might still love this community, if you stick around, given your intellectual openness. And you have a good point about the accidental inventions. However, my point about theory is so basic that it can’t really be denied. The transistor may have been invented by accident, but if the scientists didn’t have theories about how things worked, they couldn’t possibly have messed around with things in the right way to come up with accidental inventions on top of purposeful inventions. Like I said, if you truly had no theories, you might as well stick your cat on top of your computer tower to make a transistor.
And I’m still puzzled about your response to Swimmer963’s comment. Do you really think that if a theory made no sense at all to you, but nevertheless made many successful predictions and was even the basis of a new technology, you still wouldn’t believe it? Because, if that’s so, then you’re just stupid. Your comments indicate you’re not actually that stupid. That’s where I got the “you take intuition as the basis for accepting belief” comment, because your reply to her (I think Swimmer is female and has written posts on here) indicates that you do in fact take your intuition (“but that just can’t be”) over empirical demonstration.
One way of answering might be to say that there is no separate “belief” that beliefs should be grounded. But I’m not sure.
All I know is that the question annoys me, but I can’t quite put my finger on why. It reminds me of questions like (1) the accusation that you can’t justify the use of logic logically, or (2) the accusation that tolerance is actually intolerant, because it’s intolerant of intolerance. There might be a level distinction that needs to be made here, as in (2), and maybe in (1), though I think that’s different.
“Are you going to tell me 0 dimensions make sense?” No, but we might ask you why you take intuition as the basis for accepting truth at all. That’s a pretty big implicit assumption you’re making.
“Theories didn’t make transistors. People did at Bell Labs with trial and error. Predictions had nothing to do with it. Math had nothing to do with it.” Ah. The people did it without theories, math, or predictions? I’d like to know more! Because I don’t know how one would go about constructing anything, e.g. a transistor, otherwise. You might as well walk into a lab with equipment and randomly jam things together. (Heya, cat? ‘Meow’ Wanna help me build a transistor? ‘Meow’ Okay, let’s place you on top of this computer; maybe that will do something, I don’t know, because I don’t even have theories! ‘Meow’ Hm, that didn’t work. But at least you look warm, curled up on top of my computer tower. Oh wait, I’m still making inferences based on the prediction that temperature evens out, which comes from my theory! So I guess you might be freezing for all I know.)
“Theories predict, but do not explain? What good is that?”
There’s a reason we ask new people to learn a little bit about the LW community before posting. Anticipation of experience as the measure of your belief is a fundamental concept here.
You can think of it this way: a good explanation lets us make many new predictions. And that is the sole use of explanation. (Does that sound too strong?)
EDIT: Really good explanations can be formulated mathematically, and from mathematical ‘laws’ you can derive predictions, as Desrtopa implies about Newton’s laws.
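For instance (a toy illustration of my own, not something from Desrtopa’s comment): combine Newton’s law of gravitation with his second law for a body in a circular orbit and you can derive a concrete, testable prediction, the orbital period:

$$\frac{G m_1 m_2}{r^2} = m_2 \frac{v^2}{r} \;\Rightarrow\; v = \sqrt{\frac{G m_1}{r}} \;\Rightarrow\; T = \frac{2\pi r}{v} = 2\pi \sqrt{\frac{r^3}{G m_1}},$$

which you can then check against, say, the Moon’s observed period around the Earth.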
I would note that high school math isn’t really “math”. At least I don’t think of it that way. Maybe that’s because I’m a “rare case”: really good at math (though not super good like some people here), with a 36 on the math ACT and an AIME qualification, and then not at all exceptionally good at college math. It could have been psychological factors: maybe if I studied linear algebra now I’d understand it just fine (in fact, I suspect I would). That’s just the justification for my observation is all.