My argument was making two points: that any immediate wants are not the preference [...]
Right now you’d want the ASI to maximize the satisfaction of your preferences, even though those preferences are not yet legible/knowable to the AI (or to yourself). The AI knows this, so it will allow those preferences to develop (taking for granted that this is the only way the AI can learn them without violating other things you currently want, like respecting the moral worth of any simulated entities that an attempt to shortcut that unfolding might create).
Like, right now you have wants that define your preferences (not at the object level, but by defining the process that would lead to your preferences being developed, and the constraints that unfolding needs to be subject to). If the AI optimizes for these, that will lead to the eventual preferences being optimized for later.
And it will be able to do this, because this is what you currently want, and the premise is that we can get the AI to do what you want.
I don’t think this makes sense.