Upvoted for trying to say something useful about CEV.
Whenever revealed preferences are non-transitive or non-independent, use the person’s stated meta-preferences to remove the issue.
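To make the inconsistency being repaired concrete, here is a minimal sketch (example items and function name are hypothetical, not from the post) of detecting a non-transitive cycle in revealed pairwise choices, the kind of case the step above would hand off to stated meta-preferences:

```python
# Revealed pairwise preferences: (a, b) means the person chose a over b.
revealed = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}

def find_cycle(prefs):
    """Return a preference cycle (evidence of non-transitivity) if one exists, else None."""
    graph = {}
    for a, b in prefs:
        graph.setdefault(a, set()).add(b)

    def dfs(node, path, visited):
        visited.add(node)
        path.append(node)
        for nxt in graph.get(node, ()):
            if nxt in path:  # back-edge: we've looped around
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path, visited)
                if cycle:
                    return cycle
        path.pop()
        return None

    visited = set()
    for start in graph:
        if start not in visited:
            cycle = dfs(start, [], visited)
            if cycle:
                return cycle
    return None

cycle = find_cycle(revealed)
# A non-None result (a closed chain like apple > banana > cherry > apple)
# means the revealed preferences are non-transitive; the proposal above
# would then consult stated meta-preferences to decide which link to drop.
```

The detection part is mechanical; the substantive step, choosing which link in the cycle to drop, is exactly where the meta-preferences come in.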
It seems odd that this is the only step where you’re using meta-preferences: I would have presumed that any theory would start off by giving a person’s approved preferences considerably stronger weight than non-approved ones. (Though since approved desires are often far and non-approved ones near, one’s approved ideal self might be completely unrealistic and not what they’d actually want. So non-approved ones should also be taken into account somehow.)
What do you mean by “actually want”? You seem to be coming dangerously close to the vomit fallacy: “Humans sometimes vomit. By golly, the future must be full of vomit!”
Would not actually want X = would not endorse X after finding out the actual consequences of X; would not have X as a preference after reaching reflective equilibrium.
Oh I see, by “approved ideal self” you meant something different than “self after reaching reflective equilibrium”. So instead of fiddling around with revealed preferences, why not just simulate the person reaching reflective equilibrium and then ask the person what preferences he or she endorses?
That was my first thought on reading the “revealed preferences” part of the post. Extrapolation first—then volition.
Could be done, but it’s harder to define (what counts as a reflective equilibrium?) and harder to model (what do you expect your reflective equilibrium to be?).