I’m not sure what you see in the distinction between simple preference and complex preference. No matter how simple an imperfect agent is, you face the problem of going from imperfect decision-making to an ideal preference order.
I don’t mean simple or complicated preferences. I mean a simple mind (perhaps “simple” was a bad choice of terminology). My “simple mind” is a mind that perfectly knows its utility function (and has a well-defined utility function to begin with). It’s just an abstraction to better understand where shouldness comes from.
Sounds about right, except that I wouldn’t call this anything close to a summary of the whole position. Also, compare the status of morality with that of probability (e.g. Probability is Subjectively Objective, Can Counterfactuals Be True?, Math is Subjunctively Objective).