Since we rejected independence, we must now consider the lotteries when taken as a whole, rather than just seeing them individually. When considered as a whole, “reasonable” lotteries are more tightly bunched around their total mean than they are individually. Hence the more lotteries we consider, the more we should treat them as if only their mean mattered.
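The concentration claim above can be checked with a quick Monte Carlo sketch. The payoff numbers here are made up for illustration: each "lottery" pays $10 with probability 0.5, else nothing.

```python
import random

# A minimal sketch (hypothetical numbers): aggregate n independent
# lotteries and measure the total's spread around its mean, as a
# fraction of that mean. The more lotteries, the tighter the bunching.
def relative_spread(n_lotteries, trials=20000, seed=0):
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(10 if rng.random() < 0.5 else 0
                          for _ in range(n_lotteries)))
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return (var ** 0.5) / mean  # standard deviation as a fraction of the mean

for n in (1, 10, 100):
    print(n, round(relative_spread(n), 3))
```

The ratio falls roughly as 1/√n, so with enough lotteries only the mean matters, which is the point being made above.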
You are absolutely correct, and it pains me because this issue should have been settled a long time ago.
When Eliezer Yudkowsky first brought up the breakdown of independence in humans, way, way back during the discussion of the Allais Paradox, the poster “Gray Area” explained why people aren’t being money-pumped, even though they violate independence. He/she came to the same conclusion in the quote above.
Here’s what Gray Area said back then:
Finally, the ‘money pump’ argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not. [bold added]
I didn’t see anyone even reply to Gray Area anywhere in that series, or anytime since.
So I bring up essentially the same point whenever Eliezer uses the Allais result, always concluding with a zinger like: If getting lottery tickets is being exploited, I don’t want to be empowered.
Please, folks, stop equating a hypothetical money pump with the actual scenario.
The Allais Paradox is not about risk aversion or lack thereof; it’s about people’s decisions being inconsistent. There are definitely situations in which you would want to choose a 50% chance of $1M over a 10% chance of $10M. However, if you would do so, you should also then choose a 5% chance of $1M over a 1% chance of $10M, because the relative risk is the same. See Eliezer’s follow-up post, Zut Allais.
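The "relative risk is the same" point is just arithmetic: the second pair of gambles is the first pair with both probabilities scaled down by a common factor of ten (equivalently, mixed with a 90% chance of nothing), so neither the probability ratio nor the expected-value ratio changes. A quick sketch using the numbers above:

```python
# Under the independence axiom, scaling both gambles' probabilities
# by the same factor should not change which one you prefer; the
# ratios below come out identical for both pairs.
pairs = {
    "original": [(0.50, 1_000_000), (0.10, 10_000_000)],
    "scaled":   [(0.05, 1_000_000), (0.01, 10_000_000)],
}
for name, ((p_a, x_a), (p_b, x_b)) in pairs.items():
    print(name,
          "probability ratio:", round(p_a / p_b, 6),
          "expected-value ratio:", round((p_a * x_a) / (p_b * x_b), 6))
```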
Turning a person into a money pump also isn’t about playing the same gamble a zillion times (as any good investor will tell you, if you play the gamble a zillion times, all the risk disappears and you’re left with only expected return, which leaves you with a different problem). The money pump works thusly: I sell you gamble A for $5. You then trade with me gamble A for gamble B. You then sell me back gamble B for $4. I then sell you gamble A for $5… wash, rinse, repeat. Nowhere in the cycle is either gamble actually paid out.
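The cycle just described can be sketched as a toy loop; the $5/$4 prices come straight from the comment above, and the point is that the subject’s cash declines even though no gamble ever resolves.

```python
# One pump cycle: buy A for $5, swap A for B, sell B back for $4.
# Neither gamble is ever played out; the subject simply loses $1
# per trip around the loop.
def run_money_pump(cycles):
    cash = 0
    for _ in range(cycles):
        cash -= 5  # subject buys gamble A for $5
        # subject trades A for B (prefers B to A) -- no cash changes hands
        cash += 4  # subject sells B back to the experimenter for $4
    return cash

print(run_money_pump(10))  # down $10 after ten cycles, no die ever rolled
```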
Are you sure you’re responding to the right person here?
1) I wasn’t claiming that Allais is about risk aversion.
2) I was claiming it doesn’t show an inconsistency (and IMO succeeded).
3) I did read Zut Allais, and the other Allais article with the other ridiculous French pun, and it wasn’t responsive to the point that Gray Area raised. (You may note that a strapping lad named “Silas” even noted this at the time.)
However, if you would do so, you should also then choose
4) You cannot substantiate the charge that you should do the latter if you did the former, since no negative consequence actually results from violating that “should” in the one-shot case. You know, the one people were actually tested on.
ETA: (I think the second paragraph was just added in tommccabe’s post.)
Turning a person into a money pump also isn’t about playing the same gamble a zillion times.
My point never hinged on it being otherwise.
The money pump works thusly: I sell you gamble A for $5. You then trade with me gamble A for gamble B. You then sell me back gamble B for $4. I then sell you gamble A for $5… wash, rinse, repeat. Nowhere in the cycle is either gamble actually paid out.
Okay, and where in the Allais experiment did it permit any of those exchanges to happen? Right, nowhere.
Believe it or not, when I say, “I prefer B to A”, it doesn’t mean “I hereby legally obligate myself to redeem on demand any B for an A”, yet your money pump requires that.
The problem is that you’re losing money doing it once. You would agree that c(0) > c(-2), yes? If they are willing to trade A for B in a one-shot game, they shouldn’t be willing to pay more for A than for B in a one-shot—you don’t trade the more valuable item for the less valuable. That their preferences may reverse in the iterated situation has no bearing on the Allais problem.
Edit: The text above following the question mark is incorrect. See my later comment quoting Eliezer for the correct statement.
The problem is that you’re losing money doing it once.
Again, if suddenly being offered the choice of 1A/1B then 2A/2B as described here, but being “inconsistent”, is what you call “losing money”, then I don’t want to gain money!
If they are willing to trade A for B in a one-shot game, they shouldn’t be willing to pay more for A than for B in a one-shot
But that’s not what’s happening in the paradox. They’re (doing something isomorphic to) preferring A to B once and then p*B to p*A once. At no point do they “pay” more for B than A while preferring A to B. At no point does anyone make or offer the money-pumping trades with the subjects, nor have they obligated themselves to do so!
Consider Eliezer’s final remarks in The Allais Paradox (I link purely for the convenience of those coming in in the middle):
Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows “34”, in which case I pay you nothing.
Let’s say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.
I have taken your two cents on the subject.
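The quoted scenario can be simulated directly. The simulation below assumes an agent with the Allais-violating preferences 1A > 1B and 2B > 2A, each worth a penny to indulge, as in Eliezer’s setup: it pays one penny when the die ends the game and two when it doesn’t, and never gains anything for it.

```python
import random

# Before noon the situation looks like the 2A/2B choice, so the
# agent pays a penny to throw the switch from A to B. If the
# 100-sided die shows 34 or less, the game continues, the choice
# becomes 1A/1B, and the agent pays a second penny to switch back.
def pennies_paid(rng):
    cost = 1                       # switch A -> B before the roll
    if rng.randint(1, 100) <= 34:  # game survives the roll
        cost += 1                  # switch B -> A before the payout
    return cost

rng = random.Random(0)
losses = [pennies_paid(rng) for _ in range(10_000)]
print(min(losses), max(losses))  # always 1 or 2 cents, never 0
```

Either way the switch ends where the agent wanted it, but relative to an agent with consistent preferences the pennies are pure loss.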
You’re right insofar as Eliezer invokes the Axiom of Independence when he resolves the Allais Paradox using expected value; I do not yet see any way in which Stuart_Armstrong’s criteria rule out the preferences (1A > 1B) ∧ (2A < 2B). However, in the scenario Eliezer describes, an agent with those preferences either loses one cent or two cents relative to the agent with (1A > 1B) ∧ (2A > 2B).
Your preferences between A and B might reasonably change if you actually receive the money from either gamble, so that you have more money in your bank account now than you did before. However, that’s not what’s happening; the experimenter can use you as a money pump without ever actually paying out on either gamble.
Yes, I know that a money pump doesn’t involve doing the gamble itself. You don’t have to repeat yourself, but apparently, I do have to repeat myself when I say:
The money pump does require that the experimenter make actual further trades with you, not just imagine hypothetical ones. The subjects didn’t make these trades, and if they saw many more lottery tickets potentially coming into play, so as to smooth out returns, they would quickly revert to standard EU maximization, as predicted by Armstrong’s derivation.
“Potentially coming into play, so as to smooth out returns” requires that there be the possibility of the subject actually taking more than one gamble, which never happens. If you mean that people might get suspicious after the tenth time the experimenter takes their money and gives them nothing in return, and thereafter stop doing it, I agree with you; however, all this proves is that making the original trade was stupid, and that people are able to learn to not make stupid decisions given sufficient repetition.
“Potentially coming into play, so as to smooth out returns” requires that there be the possibility of the subject actually taking more than one gamble, which never happens.
The possibility has to happen, if you’re cycling all these tickets through the subject’s hands. What, are they fake tickets that can’t actually be used now?
There are factors that come into play when you get to do lots of runs, but aren’t present with only one run. A subject’s choice in a one-shot scenario does not imply that they’ll make the money-losing trades you describe. They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.
“What, are they fake tickets that can’t actually be used now?”
No, they’re just the same tickets. There’s only ever one of each. If I sell you a chocolate bar, trade the chocolate bar for a bag of Skittles, buy the bag of Skittles, and repeat ten thousand times, this does not mean I have ten thousand of each; I’m just re-using the same ones.
“They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.”
We did test it out, and yes, people did act as money pumps. See The Construction of Preference by Sarah Lichtenstein and Paul Slovic.
You can also listen to an interview with one of Sarah Lichtenstein’s subjects who refused to make his preferences consistent even after the money-pump aspect was explained:

http://www.decisionresearch.org/publications/books/construction-preference/listen.html

That is an incredible interview.
Admitting that the set of preferences is inconsistent, but refusing to fix it, is not so bad a conclusion—maybe he’d just make it worse (e.g., by raising the bid on B to 550). At times he seems to admit that the overall pattern is irrational (“It shows my reasoning process isn’t too good”). At other times, he doesn’t admit the problem, but I think you’re too harsh on him in framing it as refusal.
I may be misunderstanding, but he seems to say that the game doesn’t allow him to bid higher than 400 on B. If he values B higher than 400 (yes, an absurd mistake), but sells it for 401, merely because he wasn’t allowed to value it higher, then that seems to me to be the biggest mistake. It fits the book’s title, though.
Maybe he just means that his sense of math is that the cap should be 400, which would be the lone example of math helping him. He seems torn between authority figures, the “rationality” of non-circular preferences and the unnamed math of expected values. I’m somewhat surprised that he doesn’t see them as the same oracle. Maybe he was scarred by childhood math teachers, and a lone psychologist can’t match that intimidation?
That sounds to me as though he is using expected utility to come up with his numbers, but doesn’t understand expected utility, so when asked which he prefers he uses some other emotional system.
“1) I wasn’t claiming that Allais is about risk aversion.”
The difference between your preferences over choosing lottery A vs. lottery B when both are performed a million times, and your preferences over choosing A vs. B when both are performed once, is a measurement of your risk aversion; this is what Gray Area was talking about, is it not?
“Believe it or not, when I say, ‘I prefer B to A’, it doesn’t mean ‘I hereby legally obligate myself to redeem on demand any B for an A’”
Then you must be using a different (and, I might add, quite unusual) definition of the word “preference”. To quote dictionary.com:
pre⋅fer /prɪˈfɜr/ [pri-fur]
–verb (used with object), -ferred, -fer⋅ring.
to set or hold before or above other persons or things in estimation; like better; choose rather than: to prefer beef to chicken.
What does it mean to say that you prefer B to A, if you wouldn’t trade A for B when the trade is offered? Could I say that I prefer torture to candy, even if I always choose candy when the choice is offered to me?
I prefer B to A does not imply I prefer 10B to 10A, or even I prefer 2B to 2A. Expected utility != expected return.
I agree pretty much completely with Silas. If you want to prove that people are money pumps, you need to actually get a random sample of people and then actually pump money out of them. You can’t just take a single-shot hypothetical and extrapolate to other hypotheticals when the whole issue is how people deal with the variability of returns.
“I prefer B to A does not imply I prefer 10B to 10A, or even I prefer 2B to 2A. Expected utility != expected return.”
Of course, but, as I’ve said (I think?) five times now, you never actually get 2B or 2A at any point during the money-pumping process. You go from A, to B, to nothing, to A, to B… etc.
For examples of Vegas gamblers actually having money pumped out of them, see The Construction of Preference by Sarah Lichtenstein and Paul Slovic.
Strictly speaking, Eliezer’s formulation of the Allais Paradox is not the one that has been experimentally tested. I believe a similar money pump can be implemented for the canonical version, however—and Zut Allais! shows that people can be turned into money pumps in other situations.
The difference between your preferences over choosing lottery A vs. lottery B when both are performed a million times, and your preferences over choosing A vs. B when both are performed once, is a measurement of your risk aversion; this is what Gray Area was talking about, is it not?
No, it’s not, and the problem asserted by the Allais paradox is that the utility function is inconsistent, no matter what the risk preference.
Then you must be using a different (and, I might add, quite unusual) definition of the word “preference”. To quote dictionary.com:
to set or hold before or above other persons or things in estimation; like better; choose rather than: to prefer beef to chicken.
I don’t see anything in there about how many times the choice has to happen, which is the very issue at stake.
If there’s any unusualness, it’s definitely on your side. When you buy a chocolate bar for a dollar, that “preference of a chocolate bar to a dollar” does not somehow mean that you are willing to trade every dollar you have for a chocolate bar, nor have you legally obligated yourself to redeem chocolate bars for dollars on demand (as a money pump would require), nor does anyone expect that you will trade the rest of your dollars this way.

It’s called diminishing marginal utility. In fact, it’s called marginal analysis in general.

It means you would trade A for B on the next opportunity to do so, not that you would indefinitely do it forever, as the money pump requires.
“When you buy a chocolate bar for a dollar, that “preference of a chocolate bar to a dollar” does not somehow mean that you are willing to trade every dollar you have for a chocolate bar, nor have you legally obligated yourself to redeem chocolate bars for dollars on demand (as a money pump would require), nor does anyone expect that you will trade the rest of your dollars this way.”
Under normal circumstances, this is true, because the situation has changed after I bought the chocolate bar: I now have an additional chocolate bar, or (more likely) an additional bar’s worth of chocolate in my stomach. My preferences change, because the situation has changed.
However, after you have bought A, and swapped A for B, and sold B, you have not gained anything (such as a chocolate bar, or a full stomach), and you have not lost anything (such as a dollar); you are in precisely the same position that you were before. Hence, consistency dictates that you should make the same decision as you did before. If, after buying the chocolate bar, it fell down a well, and another dollar was added to my bank account because of the chocolate bar insurance I bought, then yes, I should keep buying chocolate bars forever if I want to be consistent (assuming that there is no cost to my time, which there essentially isn’t in this case).
And something about your state has likewise changed after the swaps you described, just as it did when I bought the first chocolate bar.
Jeez, where’s Alicorn when you need her? We need someone to make a point about how, “Just because a woman sleeps with you once, doesn’t mean she’s inconsistent by …” and then show the mapping to the logic being used here.
ETA: Forget the position I imputed to Alicorn for the moment. I’m making the point: how is this bizarre extrapolation of preferences any different from a very unfortunate overextrapolation often used by men?
Jeez, where’s Alicorn when you need her? We need someone to make a point about how, “Just because a woman sleeps with you once, doesn’t mean she’s inconsistent by …” and then show the mapping to the logic being used here.
What, exactly, are you trying to accomplish here? Your last interaction with Alicorn made it pretty clear that projecting non-sequitur sexual references onto her was unwelcome. Are you trolling?
The last interaction wasn’t a “sexual reference”, even by Alicorn’s definition. I was trying to point out that her phrasing was a reference to LauraABJ’s implied beliefs about when a woman is rejecting a man, not necessarily in a sexual context.
I’d be interested to know why the follow-up kept getting modded down. As far as I can tell, people just didn’t understand.
And I don’t know how this is non-sequitur or projecting sexual references. People here are drawing absurd inferences about someone’s preferences from one-time choices. It looks to me like the same kind of questionable reasoning used in the context I mentioned, and the same kind of thing Alicorn enjoys refuting.
Sorry for having an insufficiently refined red-flag detector, and for whatever offense I may have caused. Just make sure your offense is because of the topic, not because you just realized what your overextrapolation looks like in other contexts.
Just to raise the most obvious possible objection to your phrasing: there was nothing to prevent you from making whatever metaphor you suggested Alicorn could have employed. It is generally poor manners to invoke uninvolved people as supporters of your arguments without their permission, and in this situation, if Alicorn were interested in becoming involved in this thread, she could have posted herself.
The sexual references in particular are a subset of a broad class of things from SilasBarta that I do not welcome. That class of things is “anything involving me and SilasBarta directly interacting ever again”. Just so no one interprets that last interaction too finely.
It would probably be best to make your point in your own voice and not to put words in Alicorn’s mouth (however indirectly), since you know that she will not interact directly with you to correct any misapprehensions about her views you may have.
Your point about Alicorn not being likely to correct Silas is no less apt than mine about not dragging neutral parties into an argument—in fact, it is scarcely less general.
the poster “Gray Area” explained why people aren’t being money-pumped, even though they violate independence.
I actually think that (for some examples) it’s actually simpler than that. The Allais paradox assumes that the proposal of the bet itself has no effect on the utility of the proposee. In reality, if I took a 5% chance at $100M, instead of a 100% chance at $4M, there’s a 95% chance I’d be kicking myself every time I opened my wallet for the rest of my life. Thus, taking the bet and losing is significantly worse than never having the bet proposed at all. If this is factored in correctly, EY’s original formulation of the Allais Paradox is no longer functional: I prefer certainty, because losing when certainty was an option carries lower utility than never having bet.
This is more about how you calculate outcomes than it is about independence directly. If losing when you could have had a guaranteed (or nearly-guaranteed) win carries negative utility, and if you can only play once, it does not seem like it contradicts independence.
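One way to make the regret formulation concrete is to put the “kicking myself” term directly into the expected-utility calculation. The numbers below are hypothetical: utility is taken as linear in millions (to isolate the regret term), and the size of the regret penalty is made up.

```python
REGRET = 5.0  # hypothetical disutility (in million-dollar units) of
              # losing a gamble when a sure win was on the table

def u(millions):
    return millions  # toy risk-neutral utility, to isolate the regret term

eu_certain = u(4)                                   # sure $4M
eu_gamble_raw = 0.05 * u(100) + 0.95 * u(0)         # ignoring regret
eu_gamble = 0.05 * u(100) + 0.95 * (u(0) - REGRET)  # with regret
print(eu_certain, eu_gamble_raw, eu_gamble)
```

Without the regret term the gamble’s expected utility (about 5) beats the sure thing (4); with it, certainty wins, matching the comment’s intuition without contradicting independence.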
Glad this formulation is useful! I do indeed think that people often behave as you describe, without generally losing huge sums of cash.

However, the conclusion of my post is that it is irrational to deviate from expected utility for small sums. Aggregating every small decision you make will give you expected utility.
You are absolutely correct, and it pains me because this issue should have been settled a long time ago.
When Eliezer Yudkowsky first brought up the breakdown of independence in humans, way, way back during the discussion of the Allais Paradox, the poster “Gray Area” explained why people aren’t being money-pumped, even though they violate independence. He/she came to the same conclusion in the quote above.
Here’s what Gray Area said back then:
I didn’t see anyone even reply to Gray Area anywhere in that series, or anytime since.
So I bring up essentially the same point whenever Eliezer uses the Allais result, always concluding with a zinger like: If getting lottery tickets is being exploited, I don’t want to be empowered.
Please, folks, stop equating a hypothetical money pump with the actual scenario.
The Allais Paradox is not about risk aversion or lack thereof; it’s about people’s decisions being inconsistent. There are definitely situations in which you would want to choose a 50% chance of $1M over a 10% chance of $10M. However, if you would do so, you should also then choose a 5% chance of $1M over a 1% chance of $10M, because the relative risk is the same. See Eliezer’s followup post, Zut Allais.
Turning a person into a money pump also isn’t about playing the same gamble a zillion times (as any good investor will tell you, if you play the gamble a zillion times, all the risk disappears and you’re left with only expected return, which leaves you with a different problem). The money pump works thusly: I sell you gamble A for $5. You then trade with me gamble A for gamble B. You then sell me back gamble B for $4. I then sell you gamble A for $5… wash, rinse, repeat. Nowhere in the cycle is either gamble actually paid out.
Are you sure you’re responding to the right person here?
1) I wasn’t claiming that Allais is about risk aversion.
2) I was claiming it doesn’t show an inconsistency (and IMO succeeded).
3) I did read Zut Allais, and the other Allais article with the other ridiculous French pun, and it wasn’t responsive to the point that Gray Area raised. (You may note that a strapping lad named “Silas” even noted this at the time.)
4) You cannot substantiate the charge that you should do the latter if you did the former, since no negative consequence actually results from violating that “should” in the one-shot case. You know, the one people were actually tested on.
ETA: (I think the second paragraph was just added in tommccabe’s post.)
My point never hinged on it being otherwise.
Okay, and where in the Allais experiment did it permit any of those exchanges to happen? Right, nowhere.
Believe it or not, when I say, “I prefer B to A”, it doesn’t mean “I hereby legally obligate myself to redeem on demand any B for an A”, yet your money pump requires that.
The problem is that you’re losing money doing it once. You would agree that c(0) > c(-2), yes? If they are willing to trade A for B in a one-shot game, they shouldn’t be willing to pay more for A than for B in a one-shot—you don’t trade the more valuable item for the less valuable. That their preferences may reverse in the iterated situation has no bearing on the Allais problem.
Edit: The text above following the question mark is incorrect. See my later comment quoting Eliezer for the correct statement.
Again, if suddenly being offered the choice of 1A/1B then 2A/2B as described here, but being “inconsistent”, is what you call “losing money”, then I don’t want to gain money!
But that’s not what’s happening the paradox. They’re (doing something isomorphic to) preferring A to B once and then p*B to p*A once. At no point do they “pay” more for B than A while preferring A to B. At no point does anyone make or offer the money-pumping trades with the subjects, nor have they obligated themselves to do so!
Consider Eliezer’s final remarks in The Allais Paradox (I link purely for the convenience of those coming in in the middle):
You’re right insofar as Eliezer invokes the Axiom of Independence when he resolves the Allais Paradox using expected value; I do not yet see any way in which Stuart_Armstrong’s criteria rule out the preferences (1A > 1B)u(2A < 2B). However, in the scenario Eliezer describes, an agent with those preferences either loses one cent or two cents relative to the agent with (1A > 1B)u(2A > 2B).
Your preferences between A and B might reasonably change if you actually receive the money from either gamble, so that you have more money in your bank account now than you did before. However, that’s not what’s happening; the experimenter can use you as a money pump without ever actually paying out on either gamble.
Yes, I know that a money pump doesn’t involve doing the gamble itself. You don’t have to repeat yourself, but apparently, I do have to repeat myself when I say:
The money pump does require that the experimenter make actual futher trades with you, not just imagine hypothetical ones. The subjects didn’t make these trades, and if they saw many more lottery tickets potentially coming into play, so as to smooth out returns, they would quickly revert to standard EU maximization, as predicted by Armstrongs’s derivation.
“Potentially coming into play, so as to smooth out returns” requires that there be the possibility of the subject actually taking more than one gamble, which never happens. If you mean that people might get suspicious after the tenth time the experimenter takes their money and gives them nothing in return, and thereafter stop doing it, I agree with you; however, all this proves is that making the original trade was stupid, and that people are able to learn to not make stupid decisions given sufficient repetition.
The possibility has to happen, if you’re cycling all these tickets through the subject’s hands. What, are they fake tickets that can’t actually be used now?
There are factors that come into play when you get to do lots of runs, but aren’t present with only one run. A subject’s choice in a one-shot scenario does not imply that they’ll make the money-losing trades you describe. They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.
“What, are they fake tickets that can’t actually be used now?”
No, they’re just the same tickets. There’s only ever one of each. If I sell you a chocolate bar, trade the chocolate bar for a bag of Skittles, buy the bag of Skittles, and repeat ten thousand times, this does not mean I have ten thousand of each; I’m just re-using the same ones.
“They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.”
We did test it out, and yes, people did act as money pumps. See The Construction of Preference by Sarah Lichtenstein and Paul Slovic.
You can also listen to an interview with one of Sarah Lichtenstein’s subjects who refused to make his preferences consistent even after the money-pump aspect was explained:
http://www.decisionresearch.org/publications/books/construction-preference/listen.html
That is an incredible interview.
Admitting that the set of preferences is inconsistent, but refusing to fix it is not so bad a conclusion—maybe he’d just make it worse (eg, by raising the bid on B to 550). At times he seems to admit that the overall pattern is irrational (“It shows my reasoning process isn’t too good”). At other times, he doesn’t admit the problem, but I think you’re too harsh on him in framing it as refusal.
I may be misunderstanding, but he seems to say that the game doesn’t allow him to bid higher than 400 on B. If he values B higher than 400 (yes, an absurd mistake), but sells it for 401, merely because he wasn’t allowed to value it higher, then that seems to me to be the biggest mistake. It fits the book’s title, though.
Maybe he just means that his sense of math is that the cap should be 400, which would be the lone example of math helping him. He seems torn between authority figures, the “rationality” of non-circular preferences and the unnamed math of expected values. I’m somewhat surprised that he doesn’t see them as the same oracle. Maybe he was scarred by childhood math teachers, and a lone psychologist can’t match that intimidation?
That sounds to me as though he is using expected utility to come up with his numbers, but doesn’t understand expected utility, so when asked which he prefers he uses some other emotional system.
“1) I wasn’t claiming that Allais is about risk aversion.”
The difference between your preferences over choosing lottery A vs. lottery B when both are performed a million times, and your preferences over choosing A vs. B when both are performed once, is a measurement of your risk aversion; this is what Gray Area was talking about, is it not?
“Believe it or not, when I say, “I prefer B to A”, it doesn’t mean “I hereby legally obligate myself to redeem on demand any B for an A”″
Then you must be using a different (and, I might add, quite unusual) definition of the word “preference”. To quote dictionary.com:
pre⋅fer /prɪˈfɜr/ [pri-fur] –verb (used with object), -ferred, -fer⋅ring.
to set or hold before or above other persons or things in estimation; like better; choose rather than: to prefer beef to chicken.
What does it mean to say that you prefer B to A, if you wouldn’t trade B for A if the trade is offered? Could I say that I prefer torture to candy, even if I always choose candy when the choice is offered to me?
Typo: Did you mean “prefer A to B”?
I prefer B to A does not imply I prefer 10B to 10A, or even I prefer 2B to 2A. Expected utility != expected return.
I agree pretty much completely with Silas. If you want to prove that people are money pumps, you need to actually get a random sample of people and then actually pump money out of them. You can’t just take a single-shot hypothetical and extrapolate to other hypotheticals when the whole issue is how people deal with the variability of returns.
“I prefer B to A does not imply I prefer 10B to 10A, or even I prefer 2B to 2A. Expected utility != expected return.”
Of course, but, as I’ve said (I think?) five times now, you never actually get 2B or 2A at any point during the money-pumping process. You go from A, to B, to nothing, to A, to B… etc.
For examples of Vegas gamblers actually having money pumped out of them, see The Construction of Preference by Sarah Lichtenstein and Paul Slovic.
Strictly speaking, Eliezer’s formulation of the Allais Paradox is not the one that has been experimentally tested. I believe a similar money pump can be implemented for the canonical version, however—and Zut Allais! shows that people can be turned into money pumps in other situations.
No, it’s not, and the problem asserted by Allais paradox is that the utility function is inconsistent, no matter what the risk preference.
I don’t see anything in there that about how many times the choice has to happen, which is the very issue at stake.
If there’s any unusualness, it’s definitely on your side. When you buy a chocolate bar for a dollar, that “preference of a chocolate bar to a dollar” does not somehow mean that you are willing to trade every dollar you have for a chocolate bar, nor have you legally obligated yourself to redeem chocolate bars for dollars on demand (as a money pump would require), nor does anyone expect that you will trade the rest of your dollars this way.
It’s called diminishing marginal utility. In fact, it’s called marginal analysis in general.
It means you would trade B for A on the next opportunity to do so, not that you would indefinitely do it forever, as the money pump requires.
“When you buy a chocolate bar for a dollar, that “preference of a chocolate bar to a dollar” does not somehow mean that you are willing to trade every dollar you have for a chocolate bar, nor have you legally obligated yourself to redeem chocolate bars for dollars on demand (as a money pump would require), nor does anyone expect that you will trade the rest of your dollars this way.”
Under normal circumstances, this is true, because the situation has changed after I bought the chocolate bar: I now have an additional chocolate bar, or (more likely) an additional bar’s worth of chocolate in my stomach. My preferences change, because the situation has changed.
However, after you have bought A, and swapped A for B, and sold B, you have not gained anything (such as a chocolate bar, or a full stomach), and you have not lost anything (such as a dollar); you are in precisely the same position that you were before. Hence, consistency dictates that you should make the same decision as you did before. If, after buying the chocolate bar, it fell down a well, and another dollar was added to my bank account because of the chocolate bar insurance I bought, then yes, I should keep buying chocolate bars forever if I want to be consistent (assuming that there is no cost to my time, which there essentially isn’t in this case).
And something about your state has likewise changed after the swaps you described, just as when I bought the first chocolate bar.
Jeez, where’s Alicorn when you need her? We need someone to make a point about how, “Just because a woman sleeps with you once, doesn’t mean she’s inconsistent by …” and then show the mapping to the logic being used here.
ETA: Forget the position I imputed to Alicorn for the moment. I’m making the point: how is this bizarre extrapolation of preferences any different from a very unfortunate overextrapolation often used by men?
What, exactly, are you trying to accomplish here? Your last interaction with Alicorn made it pretty clear that projecting non-sequitur sexual references onto her was unwelcome. Are you trolling?
The last interaction wasn’t a “sexual reference”, even by Alicorn’s definition. I was trying to point out that her phrasing was a reference to LauraABJ’s implied beliefs about when a woman is rejecting a man, not necessarily in a sexual context.
I’d be interested to know why the follow-up kept getting modded down. As far as I can tell, people just didn’t understand.
And I don’t know how this is non-sequitur or projecting sexual references. People here are drawing absurd inferences about someone’s preferences from one-time choices. It looks to me like the same kind of questionable reasoning used in the context I mentioned, and the same kind of thing Alicorn enjoys refuting.
Sorry for having an insufficiently refined red-flag detector, and for whatever offense I may have caused. Just make sure your offense is because of the topic, not because you just realized what your overextrapolation looks like in other contexts.
Just to raise the most obvious possible objection to your phrasing: there was nothing to prevent you from making whatever metaphor you suggested Alicorn could have employed. It is generally poor manners to invoke uninvolved people as supporters of your arguments without their permission, and in this situation, if Alicorn were interested in becoming involved in this thread, she could have posted herself.
Thanks, that makes much more sense.
The sexual references in particular are a subset of a broad class of things from SilasBarta that I do not welcome. That class of things is “anything involving me and SilasBarta directly interacting ever again”. Just so no one interprets that last interaction too finely.
It would probably be best to make your point in your own voice and not to put words in Alicorn’s mouth (however indirectly), since you know that she will not interact directly with you to correct any misapprehensions about her views you may have.
ETA: Whoops, I see RobinZ got there first.
Your point about Alicorn not being likely to correct Silas is no less apt than mine about not dragging neutral parties into an argument—in fact, it is scarcely less general.
Yes, but having made the swaps seems highly questionable as a dimension of your state that affects your preferences.
It’s highly-questionable as a relevant state dimension because … you need it to be to make the results come out right?
I actually think that (for some examples) it’s simpler than that. The Allais paradox assumes that the proposal of the bet itself has no effect on the utility of the proposee. In reality, if I took a 5% chance at $100M instead of a 100% chance at $4M, there’s a 95% chance I’d be kicking myself every time I opened my wallet for the rest of my life. Thus, taking the bet and losing is significantly worse than never having had the bet proposed at all. If this is factored in correctly, EY’s original formulation of the Allais Paradox is no longer functional: I prefer certainty, because losing when certainty was an option carries lower utility than never having bet.
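A back-of-the-envelope version of this (the regret penalty, and its size, are purely illustrative assumptions on my part):

```python
# Regret-adjusted expected utility, in $M, with linear utility for
# money. The regret penalty for losing after declining a sure $4M
# is a hypothetical parameter, here worth -$2M of utility.
REGRET = 2.0  # $M-equivalent disutility of kicking yourself forever

def eu_sure_thing():
    return 1.00 * 4.0                        # 100% chance of $4M

def eu_gamble():
    return 0.05 * 100.0 + 0.95 * (-REGRET)   # 5% at $100M, 95% at $0 plus regret

print(round(eu_sure_thing(), 2), round(eu_gamble(), 2))  # 4.0 vs 3.1
```

With no regret term the gamble's expectation ($5M) beats the sure thing; any regret penalty worth more than about $1.05M in this toy setup flips the preference to certainty, without any inconsistency.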
This is more about how you calculate outcomes than it is about independence directly. If losing when you could have had a guaranteed (or nearly-guaranteed) win carries negative utility, and if you can only play once, it does not seem like it contradicts independence.
Glad this formulation is useful! I do indeed think that people often behave like you describe, without generally losing huge sums of cash.
However, the conclusion of my post is that it is irrational to deviate from expected utility for small sums. Aggregating every small decision you make will give you the expected utility.
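A quick simulation of the aggregation point (the gamble and its numbers are made up for illustration):

```python
import random

# One gamble: 10% chance of $10, else $0; expected value $1.
# Played once, the outcome is almost always far from $1. Aggregated
# over many plays, the average tightens around $1, which is why
# maximizing expectation is the right policy for repeated small bets.
def play():
    return 10.0 if random.random() < 0.10 else 0.0

random.seed(0)  # fixed seed so the run is reproducible
for n in (1, 100, 100_000):
    avg = sum(play() for _ in range(n)) / n
    print(n, round(avg, 3))
```

This is the same point the head of the thread makes: the more lotteries you take together, the more tightly bunched the total is around its mean, and the more only the mean matters.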