This is not the same as CEV. CEV has the AI extrapolate a user's idealized future values and act to implement them, overriding current preferences if necessary; my model forbids that. In my framework, the AI never drives or predicts value change: it simply provides accurate world models and optimal plans relative to the user's current values, which only the user can update.
CEV also assumes that extrapolated values converge; my model protects normative autonomy and allows value diversity to persist.
I have thought a lot about anthropics.
In an infinite universe, there are infinitely many observers identical to you. You cannot define a uniform probability distribution over a countably infinite sample space, and comparing infinite cardinalities does not help, since the relevant sets of observers all have the same cardinality. So in a Sleeping Beauty-style setup, you cannot ask for the probability that it is Monday rather than Tuesday given that the coin landed tails, because there are infinitely many observers in both cases.
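To make the non-measurability point precise, here is a minimal sketch of why no uniform distribution exists over countably many observers $o_1, o_2, \dots$. Suppose each observer is assigned the same probability $c$. Countable additivity forces

$$1 = P\Big(\bigcup_{n=1}^{\infty} \{o_n\}\Big) = \sum_{n=1}^{\infty} c,$$

and the sum is $0$ if $c = 0$ and diverges if $c > 0$, so no such $c$ exists.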
Do you agree that anthropic questions like these are meaningless if we live in an infinite universe?