Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin’ Future!) won’t be optimized according to our values, won’t be anywhere near as good as it could’ve been otherwise.
Even at the personal level our values change with time, somewhat dramatically as we grow in maturity from children to adolescents to adults, and more generally just as a process of learning that modifies our belief networks.
Hanson’s analogy to new generations of humans is especially important—value drift occurs naturally. A fully general argument against any form of value drift/evolution is equivalent to a fully general argument against begetting new generations of humans. Thus it is not value drift that equates to deathism, but the lack of value drift.
Out of the space of all trajectories value evolution could take, some are going to be much worse or much better from the perspective of our current value system.
All we have to do (and in fact the best we possibly can do) is guide that trajectory in a favorable direction from our current vantage point.
Ignore AGI for a moment and imagine a much more gradual future where human intelligence enhancement continues but asymptotically reaches a limit while human population somehow continues to explode exponentially (via wormholes for example). Run that millions of years forward and you get a future that is unimaginably different than our present, with future beings that are probably quite unlike humans.
A runaway Singularity is more or less equivalent to a portal into that future. It just compresses or accelerates it.
I would bet that many of those futures, probably even most, will be considerably better overall than our past from the perspective of our current value system, and I would give good odds that they would be better than our present from the perspective of our current value system.
We don’t need to suddenly halt all value drift in order to create a good future, and even if we could (questionable) it would probably be counterproductive or outright disastrous.
Preserving morality doesn’t mean that we have nothing more to decide, that there are no new ideas to create. New ideas (and new normative considerations) can be created while keeping morality. Value drift is destruction of morality, but lack of value drift doesn’t preclude development, especially since morality endorses development (in the ways it does).
Every improvement is a change, but not every change is an improvement. The current normative considerations that we have (i.e. morality) describe what kinds of change we should consider improvement. Forgetting what changes we consider to be improvement (i.e. value drift) will result in future change that is not moral (i.e. not improvement from the current standpoint).
I find myself in agreement with all this, except perhaps:
Forgetting what changes we consider to be improvement (i.e. value drift) will result in future change that is not moral (i.e. not improvement from the current standpoint).
Perhaps we are using the term ‘value drift’ differently.
Would you consider the change from our ancestors’ values to modern western values to be ‘value drift’? What about the change from the values of the age-10 version of yourself to those of the age-20 version, or the current one?
The current world is far from an idealization of my ancestors’ values, and my current self is far from the idealized extrapolation my past self would have projected according to its values at the time.
I don’t find this type of ‘value drift’ to be equivalent to “forgetting what changes we consider to be improvement”. At each step we are making changes to ourselves that we do believe are improvements at the time according to our values at the time, and over time this process can lead to significant changes in our values. Even so, this does not imply that the future changes are not moral (not improvement from the current standpoint).
Change in values implies that the future will not be ideal from the perspective of current values, but from this it does not follow that all such futures are worse from the perspective of our current values (because all else is not equal).