“The next question I want to consider is a normative one: is pre-rationality rational? Pre-rationality says that we should reason as if we were pre-agents who learned about our prior assignments as information, instead of just taking those priors as given. But then, shouldn’t we also act as if we were pre-agents who learned about our utility function assignments as information, instead of taking them as given?”
As I understand it, preferences, and therefore utility functions, are by nature a-rational, since they are their own ends. Choosing to alter your own utility function involves one part of that same function deciding that there is more overall utility in changing another part (for example, wishing you didn’t like the taste of chocolate so you won’t get fat). In this regard, we can no more step outside our utility functions than we can our priors.
I have been more concerned with the fickle nature of utility functions, and what that implies for predicting future utility, especially in the face of irreversible decisions (a good example is the decision to have a child, though going to graduate school fits in many ways too). Should humans restrict future utility calculations to only those preferences that remain stable over time and across many circumstances? I fear much subtlety is lost if we consider preference too broadly, but that might be my present self selfishly weighting her preferences too heavily.