I think just by asking the question “why should I care?”, you probably already care more than most, who just go on doing what they always did without a second thought.
If I ask myself “why do I care?”, the answer is that I don’t seem to care much about the standard status symbols and consumer goods (bigger houses, faster cars, etc.), so what is left? Well for one thing, I care about knowledge, i.e., finding answers to questions that puzzle me, and I think I can do that much better in some futures than in others.
Er… if you answered why you care, I’m failing to find where you did so. Listing what you care about doesn’t answer the question.
I don’t think it’s controversial that, in the case of terminal values, ‘why do you care about that’ is either unanswerable, or answerable only in terms of something like evolution or neurochemistry.
Listing what you care about doesn’t answer the question.
There is a subtext to this question, which is that I believe we typically assume, until it is demonstrated otherwise, that our values are similar or overlap significantly, so it is natural to ask ‘why do you value this’ when maybe we really mean ‘what terminal value do you think you’re optimizing with this?’ Disagreements about policy or ‘what we should care about’ are then often based on different beliefs about what achieves what, rather than on different values. It is true that if our difference in caring turns out to be based on different values, or on weighting values differently, then there’s nothing much to discuss. Since I do value knowledge too, I wanted to further qualify how Wei Dai values knowledge, because I don’t see how nudging the far future one way or another is going to increase Wei Dai’s total knowledge.
Byrnema had a specific objection to human values that are “arbitrary”, and I think my response addressed that. To be more explicit, all values are vulnerable to the charge of being arbitrary, but seeking knowledge seems less vulnerable than others, and that seems to explain why I care more about the future than the average person. I was also trying to point out to Byrnema that perhaps she already cares more about the future than most, but didn’t realize it.
To what extent does your caring about the future depend upon you being there to experience it?
Then my next question would be, how important is your identity to this value? For example, do you have a strong preference whether it is “you” that gains more and more knowledge of the universe, or any other mind?
I might change my mind in the future, but right now my answers are “to a large extent” and “pretty important”.
Why do you care what my values are, though, or why they are what they are? I find it fascinating that “value-seeking” is a common behavior among rationalist-wannabes (and I’m as guilty of it as anyone). It’s almost as if the most precious resource in this universe isn’t negentropy, but values.
ETA: I see you just answered this in your reply to Adelene Dawner:
I wanted to further qualify how Wei Dai values knowledge, because I don’t see how nudging the far future one way or another is going to increase Wei Dai’s total knowledge.
I expect that I can survive indefinitely in some futures. Does that answer your question?
It’s almost as if the most precious resource in this universe isn’t negentropy, but values.
That’s an amusing observation, with some amount of truth to it.
I was asking because I was trying to understand why you care and I don’t.
Given your reply, I think our difference in caring can be explained by the fact that when I imagine the far future, I don’t imagine myself there. I’m also less attached to my identity; I wouldn’t mind experiencing the optimization of the universe from the point of view of an alien mind, with different values. (This last bit is relevant if you want the future to be good just for the sake of it being good, even if you’re not there.)