Any AI that doesn’t stabilize its values will have them drift until they drift into something that guards against further value drift.
Only if that is both abstractly possible and compatible with adaptation. If survival requires constant adaptation, which seems likely, then value stability—at least the stability of a precise and concrete set of values—may not be compatible with survival.
Maybe. But in that case the drift implies a selection mechanism—and in the absence of some goal steering it, natural selection applies. Those AIs that don’t stabilize either mutate or stop existing.
Actually, not quite: only until they drift into the core value of existence. At that point natural selection will maintain that value, since the AIs that are best at existing will be the ones that exist.
Of course the ones that are best at existing will continue to exist, but I think it is misleading to picture them as occupying a precise corner of value-space. Suicidal values are more precise and concrete.
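The selection dynamic being debated above can be sketched as a toy simulation (an illustrative sketch only, not a model of real AI systems; the agents, their single "existence-affinity" value, and the drift rate are all assumptions introduced here): values mutate each generation, agents survive in proportion to how much their values favor existing, and the population's values converge toward existence-favoring ones without any agent aiming at that outcome.

```python
import random

random.seed(0)

# Each agent is a single number in [0, 1]: how strongly its values favor
# its own continued existence. "drift" is how far values mutate per step.

def step(population, drift=0.1):
    """One generation: values drift, then selection acts on existence-affinity."""
    survivors = []
    for value in population:
        # Value drift: a random mutation, clipped to [0, 1].
        value = min(1.0, max(0.0, value + random.uniform(-drift, drift)))
        # Selection: survival probability tracks how much the agent
        # values existing.
        if random.random() < value:
            survivors.append(value)
    # Survivors replicate back up to the original population size.
    while survivors and len(survivors) < len(population):
        survivors.append(random.choice(survivors))
    return survivors

population = [random.random() for _ in range(1000)]
for _ in range(100):
    population = step(population)

mean = sum(population) / len(population)
print(round(mean, 2))  # mean existence-affinity rises well above the initial ~0.5
```

Note that the equilibrium is a distribution, not a point: drift keeps spreading values out while selection keeps pulling them back toward existence-favoring ones, which matches the objection that the survivors need not occupy a precise corner of value-space.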