If you have a consistent utility function over outcomes, you cannot be money-pumped. This is not a utility function over changes in money, it is a utility function over total money.
This actually struck me as a problem with your argument from earlier, though I didn’t point it out at that time. I think you plain don’t understand expected utility, actually.
In the above, the question is the preference between the lottery (.5(B + 1) + .5(B + 2)) and the certainty (1(B + 1.49)), and a consistent version of a human (as opposed to an actual human) would prefer the former lottery given at least a hundred bucks in bank account B. After that, of course, the amount in B changes. But if you start by putting consistent utilities over the total amount of money, you cannot be money-pumped.
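A quick numeric check of this claim (my own illustration: log utility is a stand-in for a consistent concave utility, and the specific bank balances are chosen just to show the crossover):

```python
import math

def eu_lottery(B):
    # Expected utility of the 50/50 lottery over B+1 and B+2,
    # under log utility (an illustrative concave utility)
    return 0.5 * math.log(B + 1) + 0.5 * math.log(B + 2)

def u_certain(B):
    # Utility of taking the certain B + 1.49 instead
    return math.log(B + 1.49)

# With nothing in the bank, risk aversion dominates: take the sure thing.
print(eu_lottery(0) > u_certain(0))      # False

# With at least a hundred bucks in the bank, utility is locally almost
# linear, so the lottery's higher expected value (B + 1.50) wins.
print(eu_lottery(100) > u_certain(100))  # True
```

The preference flips with wealth, which is exactly why the utility function has to be over total money rather than over changes in money.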
You were correct. I think that now I understand expected utility; I was arrogant enough to follow my mathematical intuitions, assuming the details would fix themselves later, rather than working it all through. What I would never have done in a published paper, I did in a blog post.
Can you please write a post on what your old incorrect understanding of expected utility was, and why it was wrong (before it fades away completely)? I suspect your confusion to be a common one, and writing it down would help others. Think of it as payback for those who tried (unsuccessfully, until Eliezer’s attempt) to point out that perhaps you didn’t understand expected utility correctly.
I’ve added that to the post now: a sketch of the original, and what went wrong (simple version: I applied financial/arbitrage insights to utility, without realising that the mere existence of investors and arbitragers in the world would change the price you put on something).
Think of it as payback for those who tried (unsuccessfully, until Eliezer’s attempt) to point out that perhaps you didn’t understand expected utility correctly.
Oh, it wasn’t Eliezer pointing it out that made me realise it; it was me trying to prove Eliezer wrong that did the trick.
If you have a consistent utility function over outcomes, you cannot be money-pumped.
If your utility is convex in money and you follow independence, I can money-pump you no matter what the situation, as the lottery L (a 50/50 chance of £1 or £2, with expected value £1.50) will always be worth more to you than £1.50. I will continue offering you that contract until you have no cash left, an event that is certain to eventually happen. So your statement is incorrect.
If your utility function is concave in money, it’s a little harder, but I can use options. Contract A will give you £1 if a coin comes up heads; contract B will give you £1 if that same coin comes up tails. I offer you cash in exchange for an option to take each of these contracts from you for free (should you ever get your hands on them), valid as long as your capital is within £2 of your current amount. You should name a price less than 0.50 for these options, including a small utility profit for you; I take one option out on each of A and B. I then sell you A and B together, for £1 (since together they are exactly the same as a certain £1). I then exercise both my options and get A back, then B.
Of course, you would never do anything as stupid as accepting the contracts I’ve just described; but the fact remains that if your utility is not linear in money, you cannot put consistent prices on contracts and their combinations, so you will end up losing if ever you blindly follow your utility function.
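The loss can be made concrete. A sketch (my own illustration, using log10 utility and a £10 balance, the same numbers as the log10 reply below): an agent who names one fixed price per option, valuing each contract at its stand-alone certainty equivalent and ignoring what else they hold, pays out more than they take in.

```python
import math

def u(x):
    # Illustrative concave utility: log10 of total wealth in £
    return math.log10(x)

wealth = 10.0

# Stand-alone certainty equivalent of one contract (£1 on a fair coin flip)
# at current wealth: solve u(wealth + ce) = .5*u(wealth) + .5*u(wealth + 1),
# which for log10 gives wealth + ce = sqrt(wealth * (wealth + 1)).
ce = math.sqrt(wealth * (wealth + 1)) - wealth   # ≈ 0.4881, below £0.50

# The agent names this price for each option, valuing them independently.
wealth += 2 * ce    # sells both options for cash
wealth -= 1         # buys contracts A and B together for £1
# The pumper now exercises both options, taking A and B back for free.

print(wealth)       # ≈ 9.976: strictly less than the £10 we started with
```

The pump works precisely because the fixed option prices ignore the fact that, once you hold B, parting with A is a bigger deal than the stand-alone calculation suggests.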
I will continue offering you that contract until you have no cash left, an event that is certain to eventually happen.
Only if you have an infinite bankroll. Otherwise, there is some tiny but nonzero chance that you lose all your money and the player makes a huge profit. And for the player with the convex utility function, the utility of that outcome is enough to make the whole ensemble of gambles worthwhile.
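A sketch of the bankroll point, using the standard fair-game gambler’s-ruin formula (treating each £1.50-for-a-lottery round as one step of a symmetric random walk; the specific capital figures are my own illustration):

```python
from fractions import Fraction

def ruin_probability(player, house):
    # In a fair game, a player with `player` units facing a house with
    # `house` units goes broke first with probability house/(player+house):
    # the classic gambler's-ruin result for a symmetric random walk.
    return Fraction(house, player + house)

print(ruin_probability(10, 100))     # 10/11
print(ruin_probability(10, 10**12))  # close to, but never exactly, 1
```

Ruin is certain only in the limit of an infinite house bankroll; against any finite bankroll the player keeps a nonzero chance of breaking the house instead.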
Then if you extend that to the infinite case by putting the limit outside the expected utility calculation, you will find that the limit is nonnegative too. Or if you don’t assume that the result in the infinite case is the limit of finite results, then you have different problems, but then who says the strategy in the infinite case is the same as the limit of finite strategies?
You should name a price less than 0.50 for these options, including a small utility profit for you; I take one option out on each of A and B.
To pick a concave function at random, let U(x£) = log10(x) utilons. And let my bank account contain 10£ at the beginning of the experiment.

U(10£) = EU(9£+A+B) = 1u, so I pay 1£ for contracts A and B together.

Assume WLOG that I’m considering contract A first. EU(y+B) = .5*U(y) + .5*U(y+1£). Set that equal to 1u and solve for y: y=9.51249£. Thus I’m indifferent to selling contract A for 0.51249£.

After doing so, I am then indifferent to selling contract B for 0.48751£.

So I’m back to exactly 10£. No money pump.
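The arithmetic above can be checked mechanically (a direct transcription of the steps, nothing new assumed):

```python
import math

# U(x£) = log10(x), starting bank balance 10£.
# After paying 1£ for contracts A and B (jointly a certain 1£),
# holding 9£ + A + B is worth U(10) = 1 utilon with certainty.

# Price for parting with A while still holding B:
# solve .5*U(y) + .5*U(y+1) = 1, i.e. y*(y+1) = 100
y = (-1 + math.sqrt(401)) / 2        # ≈ 9.51249
price_A = y - 9                      # ≈ 0.51249

# Then parting with B from certain wealth y: U(y + price_B) = 1
price_B = 10 - y                     # ≈ 0.48751

print(round(price_A + price_B, 12))  # 1.0: back to exactly 10£, no pump
```

Note that the two prices differ, and the second depends on the first sale having happened; that wealth-dependence is what the option scheme above has to assume away.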
I apologise. The post has been retracted.
Upvoted.
The outcomes of a utility function over the whole thing can’t be repeated: roughly speaking, a whole history of transactions counts as a single outcome.
See also http://en.wikipedia.org/wiki/Marginalism.