The latter is not necessarily a bad thing though.

It is a bad thing, in the sense that “bad” is whatever I (normatively) value less than the other available alternatives, and value-drifted WBEs won’t be optimizing the world in a way that I value. The property of valuing the world in a different way, and correspondingly of optimizing the world in a direction that I value less, is the “value drift” I’m talking about. In other words, if it’s not bad, there isn’t much value drift; and if there is enough value drift, it is bad.
You’re right, in a sense, that we’d like to avoid it, but if it occurs gradually, it feels much more like “we just changed our minds” (we definitely don’t value “honor” as much as the ancient Greeks did, etc.), as compared to “we and our values were wiped out”.
The problem is not “losing our values”; it’s the future being optimized toward something other than our values. The details of the process that leads to the incorrectly optimized future are immaterial; it’s the outcome that matters. When I say “our values”, I’m referring to a fixed idea, one which doesn’t depend on what happens in the future; in particular, it doesn’t depend on whether there are people with these or different values in the future.
I think one reason why people (including me, in the past) have difficulty accepting the way you present this argument is that you’re speaking in overly abstract terms, while many of the values that we’d actually like to preserve are ones that we appreciate most when we consider them in “near” mode. It might work better if you gave concrete examples of ways in which catastrophic value drift could occur, like naming Bostrom’s all-work-and-no-fun scenario where
what will maximize fitness in the future will be nothing but non-stop high-intensity drudgery, work of a drab and repetitive nature, aimed at improving the eighth decimal of some economic output measure
or some similar example.