Suppose I have a state consisting of (a, b): number of apples, number of bananas. My utility is U_a(a) + U_b(b) - i.e. I have no terminal desire for money, and my utility for apples and bananas is separable (a sum, with each term depending on only one of the two). I also have a budget constraint: P_a·a + P_b·b ≤ m, i.e. price of apples times number of apples plus price of bananas times number of bananas is at most m, the amount of money I start with.
A useful technique for this sort of problem is to separate it into two optimization problems (thus the term “separable” for the utility). Here’s how that works:
First, pick some amount of money m_a to budget to apples, and budget the rest m_b = m − m_a to bananas.
Then, maximize U_a(a) subject to P_a·a ≤ m_a, and separately maximize U_b(b) subject to P_b·b ≤ m_b.
(Note: if a and b were higher-dimensional, i.e. each involved a whole bunch of variables, then this would still work, and indeed that’s the case in which it’s typically interesting/useful to do this.)
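The two-step procedure can be sketched numerically. This is a minimal sketch with made-up ingredients (square-root utilities, so returns are diminishing, and arbitrary prices P_a = 2, P_b = 1 and budget m = 10); the grid search over m_a stands in for "adjust the budget split to be optimal":

```python
import math

# Assumed concave utilities (not from the original discussion).
def U_a(a): return math.sqrt(a)
def U_b(b): return math.sqrt(b)

P_a, P_b, m = 2.0, 1.0, 10.0  # made-up prices and total budget

def solve_subproblem(U, P, budget):
    # With monotone utility the budget constraint binds:
    # buy as much as the budget allows.
    q = budget / P
    return q, U(q)

def total_utility(m_a):
    # Split the budget, solve each sub-problem independently, sum the results.
    _, u_a = solve_subproblem(U_a, P_a, m_a)
    _, u_b = solve_subproblem(U_b, P_b, m - m_a)
    return u_a + u_b

# Crude stand-in for optimizing the budget split (m_a, m_b).
best_m_a = max((i * 0.01 for i in range(1001)), key=total_utility)
```

For these particular utilities the optimal split can also be found by hand (m_a = 10/3), which the grid search recovers to within its step size.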
Assuming that each sub-problem can be solved efficiently, all we have to do is adjust the budget-split (m_a, m_b) to be optimal. That’s where the shadow price comes in: the shadow price of each sub-optimization problem tells us how much utility we would have to trade off in the a-problem to get some amount of utility in the b-problem, or vice-versa. To put it differently: the shadow price summarizes all the information about the a-decisions which is relevant to the b-decisions. To put it yet another way: the shadow price is the opportunity cost of spending a marginal dollar on a rather than b.
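To make this concrete, here is a toy version of the apple/banana problem (assumed square-root utilities, made-up prices P_a = 2, P_b = 1, budget m = 10). Each sub-problem's shadow price is the marginal utility of one extra dollar of budget; at the optimal split the two shadow prices are equal, so shifting a marginal dollar either way gains nothing:

```python
import math

# Optimal utility from a given budget in each sub-problem:
# with U(a) = sqrt(a) and P_a = 2, spending m_a buys a = m_a / 2 apples.
def value_a(m_a): return math.sqrt(m_a / 2.0)
def value_b(m_b): return math.sqrt(m_b / 1.0)

def shadow_price(value, budget, eps=1e-6):
    # Shadow price = marginal utility of one more dollar of budget,
    # estimated by a forward finite difference.
    return (value(budget + eps) - value(budget)) / eps

# For these utilities the optimal split works out to m_a = 10/3, m_b = 20/3.
m_a_star = 10.0 / 3.0
lam_a = shadow_price(value_a, m_a_star)
lam_b = shadow_price(value_b, 10.0 - m_a_star)
```

At the optimum lam_a and lam_b coincide; away from it, whichever sub-problem has the higher shadow price is the one that should get the marginal dollar.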
And this is exactly the way people use shadow prices in practice, all the time. When I’m trying to avoid spending a lot of money on a new gadget, it’s not because I value money as an end in itself, it’s because I can use money saved to gain utility in other problems in my life. When trying to decide how much to spend on the gadget, I don’t need to directly think about all the other problems in my life—I don’t have to think about it all as one giant optimization problem. Instead, I just have a rough intuition for how much utility I can get in the other problems of my life from $X extra, and then I decide whether I can get more utility than that by spending $X extra on the new gadget. That rough intuition for how-much-utility-$X-can-buy is the shadow price.
Now we’re ready to answer the question. The shadow price quantifies the opportunity cost, so if I’m paid my shadow price, then that’s just barely enough to cover my opportunity cost. It’s reimbursing me for the opportunity cost of doing-the-thing rather than spending my time and resources on other problems in my life, but it’s just reimbursing me, nothing extra. I’m not actually eager to take the deal, I’m indifferent.
The idea of a cheerful price is that it does give me something extra. It’s not just paying my opportunity cost, I’m not just indifferent between taking the deal or not; it’s giving me more utility than I’d get from spending my time and resources on other problems in my life.
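A tiny numeric illustration of the difference, with made-up numbers: suppose each dollar buys me 0.5 utils in the rest of my life (my rough shadow-price intuition), and doing the thing costs me 3 utils of foregone opportunities. Paid exactly the break-even price, my net gain is zero and I’m indifferent; a cheerful price is anything strictly above it:

```python
# Made-up numbers for illustration only.
lam = 0.5               # utils per dollar elsewhere in my life (shadow-price intuition)
opportunity_cost = 3.0  # utils I give up by doing the thing

def net_gain(payment):
    # Utility the payment buys elsewhere, minus the opportunity cost of doing it.
    return payment * lam - opportunity_cost

breakeven = opportunity_cost / lam  # paid exactly this -> indifferent
# Any payment above `breakeven` leaves strictly positive surplus: a cheerful price.
```

Here breakeven = 6: at $6 I’m reimbursed but indifferent; at, say, $8 I come out ahead.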
Your example is interesting and clarifies exchange rates. However,
The shadow price quantifies the opportunity cost, so if I’m paid my shadow price, then that’s just barely enough to cover my opportunity cost.
This is an interpretive point I’d like to focus on. When you move a constraint, in this case by changing the price, the underlying equilibrium of the optimization shifts. From this perspective your usage of the word ‘barely’ stops making sense to me: if you were to ‘overshoot’, you wouldn’t be optimal in the new optimization problem.
At this point I understand that the cheerful price will be at least the shadow price. You want to be able to shift the equilibrium point and have slack left over. It just seems obvious, to me, that the shadow price isn’t an exactly measurable thing in this context, so you’d naturally be led to form a confidence interval (belief) for it. The cheerful price is just the upper estimate on that. Hence I’m surprised that this is being treated as a new / distinct concept.