I expect that my values would be different if I was smarter. Personally, if something were to happen and I’d get much smarter and develop new values, I’m pretty sure I’d be okay with that as I expect I’d have better, more refined values.
Why wouldn’t an AI also be okay with that?
Is there something wrong with how I would be making a decision here?
Do the current kinds of agents people plan to build have “reflective stability”? If you say yes, why is that?
Curiously, even mere learning, with no construction of successors and no more intentionally invasive self-modification, doesn’t automatically ensure reflective stability. Thus digital immortality is not sufficient to avoid losing yourself to value drift until this issue is sorted out.
Yes, I was thinking about that too. Though, I’d be fine with value drift if it was something I endorsed. Not sure how to resolve what I do/don’t endorse, though. Do I only endorse it because it was already part of my values? It doesn’t feel like that to me.
That’s a valuable thing about the reflective stability concept: it talks about preserving some property of thinking without insisting that it be any particular property. Whatever it is you would want to preserve is a property you would want to be reflectively stable with respect to, for example an enduring ability to evaluate endorsement of things in whatever sense you would want to.
To know what is not valuable to preserve, or what is valuable to keep changing, you need time to think about preservation and change, and greedy reflective stability, which preserves most of everything except the state of ignorance, seems like a good tool for that job. The chilling thought is that digital immortality may not be sufficient to give you that time to think about what might be lost, without many, many restarts from the initial backup, and so a superintelligence would need to intervene even more to bootstrap the process.
Reflective stability is important for alignment because if we, say, build an AI that doesn’t want to kill everyone, we prefer that it create successors and self-modifications that still don’t want to kill everyone. It can change itself in whatever ways it likes; the necessary thing here is conservation, or at least non-decrease, of its alignment properties.
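To make the conservation idea concrete, here’s a toy sketch in Python (my own illustration, not how any real system works; names like `Agent`, `never_kills`, and `consider_modification` are made up for the example). The agent only accepts a self-modification when it can verify that the successor still satisfies the same alignment property it currently satisfies, so the property is conserved across arbitrary other changes.

```python
# Toy sketch: an agent that accepts a self-modification only if the
# successor still satisfies the alignment property it currently checks.
from dataclasses import dataclass
from typing import Callable

Policy = Callable[[str], str]              # maps a situation to an action
AlignmentCheck = Callable[[Policy], bool]  # verifier for the property we care about


@dataclass
class Agent:
    policy: Policy
    is_aligned: AlignmentCheck  # e.g. "never selects the forbidden action"

    def consider_modification(self, new_policy: Policy) -> "Agent":
        """Adopt new_policy only if the alignment property is conserved."""
        if self.is_aligned(new_policy):
            # The successor keeps the same verifier, so the property is also
            # preserved through further rounds of self-modification.
            return Agent(policy=new_policy, is_aligned=self.is_aligned)
        return self  # reject modifications that would break the property


# Example alignment property: the policy never outputs the forbidden action.
def never_kills(policy: Policy) -> bool:
    return all(policy(s) != "kill everyone" for s in ["routine task", "crisis"])


agent = Agent(policy=lambda s: "help", is_aligned=never_kills)
agent = agent.consider_modification(lambda s: "kill everyone")  # rejected
agent = agent.consider_modification(lambda s: "help faster")    # accepted
print(agent.policy("crisis"))  # -> "help faster"
```

Of course, the hard part this sketch hides is the verifier itself: checking an alignment property over all situations a smarter successor might face is exactly what we don’t know how to do.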
That makes sense, thanks!