The scaling and translations don’t correspond to anything real. It’s simply that if I have a utility function that has value u1 in world w1 and u2 in world w2, and u1 > u2, then I prefer w1 to w2. However, if I add a constant c to my utility function, then the utility of world w1 becomes u1 + c and that of w2 becomes u2 + c: I still prefer world w1 to w2!
Similarly, if you generalise to all worlds, then adding c doesn’t change my utility at all: I will always have the same preferences as I did before. An agent with the same utility function, plus a constant, will always make the same decisions whatever the value of that constant. The same goes for multiplication by a (positive) scalar. Since these “affine transformations” don’t change your preferences, any reasonable system of bargaining should be indifferent to them.
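A minimal sketch of this invariance (the worlds and numbers here are hypothetical, chosen only for illustration):

```python
# Preferences induced by a utility function are unchanged by any positive
# affine transformation u -> a*u + c with a > 0. Worlds and values are made up.

utilities = {"w1": 5.0, "w2": 3.0}  # u(w1) > u(w2), so w1 is preferred

def preferred(u):
    """Return the world with the highest utility."""
    return max(u, key=u.get)

def affine(u, a, c):
    """Apply the positive affine transformation x -> a*x + c to every value."""
    assert a > 0
    return {w: a * x + c for w, x in u.items()}

assert preferred(utilities) == "w1"
assert preferred(affine(utilities, a=2.0, c=-10.0)) == "w1"  # same preference
```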
The mathematical object in question is a pair of utility functions. We have to choose how we lift the concept of an affine transformation of a utility function to an affine transformation of a pair of utility functions.
One choice is to define an affine transformation of a pair of utility functions (u,v) as a pair of affine transformations (f,g) which take (u,v) to (fu, gv). With this choice we cannot compare the component utility functions within a pair.
Another choice is to define an affine transformation of a pair of utility functions (u,v) by applying a single transformation f to both components getting (fu, fv). This preserves comparisons within the pair.
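A sketch of the difference between the two liftings, with made-up numbers: independent transformations (f, g) can reverse a within-pair comparison, while a single shared transformation f cannot.

```python
# The two ways of lifting affine transformations to a pair (u, v).
u, v = 1.0, 2.0            # the two agents' utilities for one outcome; u < v

f = lambda x: 10 * x       # a positive affine transformation (a=10, c=0)
g = lambda x: 1 * x        # a different one (a=1, c=0)

# Choice one: transform the components independently with (f, g).
print(f(u) > g(v))   # True -- the comparison u < v has been reversed

# Choice two: apply the same transformation f to both components.
print(f(u) < f(v))   # True -- the comparison is preserved
```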
The key point is that our inability to do interpersonal comparisons of utility is a modeling assumption. It is something we put into the analysis, not something that we get out of the analysis.
sketerpot is asking “why can’t we just compare the utilities?” while noticing, in the same comment, that there are problems with discovering utilities. What is to stop people exaggerating their utilities in order to game the bargaining system?
sketerpot’s comment pretty much nails the situation. Since permitting interpersonal comparisons of utility opens a huge can of worms, an important leg of the broader project is to say: let us assume that interpersonal comparison of utility is impossible, and press on with the analysis to find what solutions to the bargaining problem are available under this assumption.
Utility scaling/translation can mean something if you’re choosing the scale to normalize the average and standard deviation (or other dispersion statistic) of reported marginal utilities in group decisions over time; see my comment above.
ETA: In case it’s not clear, I agree that a choice of scale for your utility function doesn’t mean anything by default, and you’re right to be pointing that out, because people mistakenly assume that way too often. But if you scale it with a certain purpose in mind, like group decision making, utility can take on additional meaning.
An example of what I mean: if you and I have to make a series of binary decisions as a two-person team, we could each report, on each decision, what is the marginal utility of option 1 over option 2, using a scale of our own choosing. Reporting marginal utility eliminates the choice of translational constant, but we are still scaling our answers according to some arbitrary choice of unit. However, suppose we expect to make, say, 100 decisions per year. We can make a rule: the absolute values of the marginal utilities you report must add up to less than 1000. In other words, you should choose your units so the average absolute marginal utility you report is around 10, or slightly less. This will result in a certain balance in our decision-making procedure: you can’t claim to care more than me on every decision; we will end up having about the same amount of influence on the outcomes.
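A sketch of this budget rule in code (the budget, names, and reported numbers below are illustrative, not from the discussion):

```python
# Each person reports the marginal utility of option 1 over option 2 for each
# decision. The absolute values of one person's reports must sum to less than
# a fixed budget, so nobody can claim to care more on every single decision.

BUDGET = 1000.0  # yearly cap on the sum of |reported marginal utilities|

def respects_budget(reports, budget=BUDGET):
    """Check that a person's reports stay within the budget rule."""
    return sum(abs(r) for r in reports) < budget

def decide(report_a, report_b):
    """Choose option 1 if the summed reported marginal utility favours it."""
    return 1 if report_a + report_b > 0 else 2

# Two team members on three decisions (positive = prefers option 1):
alice = [12.0, -8.0, 3.0]
bob = [-5.0, 9.0, -7.0]
assert respects_budget(alice) and respects_budget(bob)

print([decide(a, b) for a, b in zip(alice, bob)])  # [1, 1, 2]
```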
But again, this doesn’t mean numerical utilities are intrinsically comparable across individuals. The comparison depends on a choice of scale, a choice that can be tailored to differing purposes and hence give different meanings to the numerical utilities.