Is this a good summary of the ideas presented here or did I miss something important?
1) We need a correct definition for all the (apparently very fuzzy) concepts that CEV relies upon.
2a) People appear to have multiple “selves”; the preferences of each “self” are more consistent than the aggregation of all of them.
2b) If you strip away all the incoherent preferences, you might strip away most of the stuff you really care about.
3) A much smarter version of me does not resemble me any more. That person’s preferences are not my preferences.
4a) We are behavior-executors, not utility-maximizers. The notion of a “preference” or “goal” exists in the map, not the territory. Asking “what is someone’s true preference?” is like asking whether something is a blegg or a rube, etc.
4b) Our reports of our own preferences are unreliable.
4c) CEV doesn’t appear to address “Want to want”.
You have not considered the failure mode called “Defeated by Evolution”.
Other than that, it is a great, really short summary. Why don’t you do the same for Part 2? :)