Er… if you answered why you care, I’m failing to find where you did so. Listing what you care about doesn’t answer the question.
I don’t think it’s controversial that, in the case of terminal values, ‘why do you care about that’ is either unanswerable, or answerable only in terms of something like evolution or neurochemistry.
Listing what you care about doesn’t answer the question.
There is a subtext to this question: I believe we typically assume, until it is demonstrated otherwise, that our values are similar or overlap significantly, so it is natural to ask ‘why do you value this’ when perhaps we really mean ‘what terminal value do you think you’re optimizing with this?’ Disagreements about policy, or about ‘what we should care about’, are then often based on different beliefs about what achieves what, rather than on different values. It is true that if our difference in caring turns out to rest on different values, or on weighting the same values differently, then there’s not much to discuss. Since I do value knowledge too, I wanted to ask further how Wei Dai values knowledge, because I don’t see how nudging the far future one way or another is going to increase Wei Dai’s total knowledge.
Byrnema had a specific objection to human values being “arbitrary”, and I think my response addressed that. To be more explicit: all values are vulnerable to the charge of being arbitrary, but seeking knowledge seems less vulnerable than most, and that seems to explain why I care more about the future than the average person. I was also trying to point out to Byrnema that perhaps she already cares more about the future than most people do, but doesn’t realize it.