I think you do a good job of arguing (in the earlier part of the article) that it is logically possible to drop the independence axiom without becoming money-pumpable, by giving up logical consequentialism while keeping dynamic consistency. However, I think you do a poor job of arguing (in the later parts) that we should give up consequentialism.
You examine three in-depth examples to try to show that we’d be fine if we dropped independence: ergodicity economics, the Allais Paradox, and the Ellsberg Paradox. In all three cases, I think your argument is missing a critical step that is required for its validity.
1.
In the section on ergodicity economics, you claim the ergodicity approach embodies resolute choice because it forms a plan based on the entire decision tree and then sticks to that plan. But this isn’t sufficient to carry your point, because agents that obey the independence axiom can also be described as sticking to their original plan. (In fact, any dynamically consistent agent can be described this way, and you agreed we need dynamic consistency.)
What you’d need to show in order to carry your point is that the ergodicity approach violates consequentialism. For example, you could show this by constructing a scenario where a local re-evaluation would deviate from the original plan, but the ergodicity agent follows the original plan anyway (I sketch what such a scenario might look like below). Without showing that, this example fails to support your case.
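To make this concrete, here’s a minimal sketch of the kind of example I mean. It does not use your ergodicity objective; it uses a mean-variance objective as a stand-in for an independence-violating preference, and the payoffs are numbers I made up. The point is the shape of the demonstration: the ex-ante optimal plan picks the safe option after heads, but a local re-evaluation at that node would pick the risky one.

```python
# Hypothetical illustration (my own construction, not from the article):
# a two-stage tree where an agent with a non-separable objective
# (mean minus a variance penalty) plans one way ex ante but would
# deviate under local re-evaluation. All numbers are made up.
from itertools import product

LAMBDA = 0.0025  # variance-penalty coefficient (arbitrary choice)

STAGE1 = [(0.5, 100), (0.5, 0)]  # coin flip: heads pays 100, tails pays 0
OPTIONS = {                      # stage-2 options, identical at both nodes
    "safe":  [(1.0, 50)],
    "risky": [(0.5, 120), (0.5, 0)],
}

def score(lottery):
    """Mean-variance value of a lottery given as (probability, payoff) pairs."""
    mean = sum(p * x for p, x in lottery)
    var = sum(p * (x - mean) ** 2 for p, x in lottery)
    return mean - LAMBDA * var

def total_lottery(plan):
    """Distribution of total payoff under a plan (one choice per stage-1 branch)."""
    return [(p1 * p2, x1 + x2)
            for (p1, x1), choice in zip(STAGE1, plan)
            for p2, x2 in OPTIONS[choice]]

# Ex-ante (resolute) planning: pick the plan whose *total* distribution scores best.
plans = list(product(OPTIONS, repeat=2))
best_plan = max(plans, key=lambda plan: score(total_lottery(plan)))
print("ex-ante optimal plan (after heads, after tails):", best_plan)  # ('safe', 'risky')

# Local re-evaluation at a stage-2 node: score only the remaining lottery.
local_choice = max(OPTIONS, key=lambda c: score(OPTIONS[c]))
print("local re-evaluation picks at every node:", local_choice)  # 'risky'
# The plan says "safe" after heads, but local re-evaluation says "risky":
# a resolute agent sticks to the plan, a consequentialist deviates. That
# divergence is what the ergodicity section would need to exhibit.
```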
2.
In the section on the Allais Paradox, you give the following reasoning for why the common human answer is rational:
This is precisely the point we made with the example in the introduction to section 3. If the common component C is a large safety net, you can afford to take more risk on the remaining branch. If C is negligible, you should be more conservative. Your preference between A and B should depend on what else is in the package, because you are one agent facing the total distribution, not a collection of independent sub-agents each evaluating one branch in isolation.
But this reasoning seems to be exactly backwards from the actual result: When component C provides a safety net of $1M, humans choose the lower-risk option A, but when component C provides nothing, humans choose the higher-risk option B. Your argument in this paragraph undermines, rather than supports, the rationality of the choice you are defending.
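For reference, here is the common-consequence layout of the two problems (assuming the article uses the standard Allais payoffs), plus a quick check that the C term cancels out of any expected-utility comparison. That cancellation is exactly why the common human pattern (A when C = $1M, B when C = $0) violates independence:

```python
# The standard Allais pair with the common component C made explicit.
# In both problems the 11% "live" branch offers the same two sub-lotteries;
# only C (the 89% branch) differs: $1M in problem 1, $0 in problem 2.
M = 1_000_000

def eu(lottery, u):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

problems = {
    "C = $1M": {"A": [(0.11, M), (0.89, M)],                  # i.e. $1M for sure
                "B": [(0.10, 5 * M), (0.01, 0), (0.89, M)]},
    "C = $0":  {"A": [(0.11, M), (0.89, 0)],
                "B": [(0.10, 5 * M), (0.01, 0), (0.89, 0)]},
}

# For *any* utility function u, the EU gap between A and B is identical in
# both problems (the C term cancels), so no EU maximizer can prefer A in one
# and B in the other. Three arbitrary example utilities:
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: (x + 1) ** 0.1):
    gaps = [eu(p["A"], u) - eu(p["B"], u) for p in problems.values()]
    assert abs(gaps[0] - gaps[1]) < 1e-6 * max(1.0, abs(gaps[0]))
    print([round(g, 4) for g in gaps])
```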
And aside from this one backwards paragraph, you don’t seem to offer any basis at all for how the context ought to change the answer. You have several paragraphs of philosophical hand-waving about how it is good and appropriate that context should matter, but you don’t appear to offer anything like an algorithm saying how we should take it into account. Without a model that predicts the preference for A over B, you fail to win any Bayes points.
Nothing in this section sounds like a logical reason to consider the common human choice in the Allais Paradox more rational than I previously did.
Sidenote: Empirical Money Pumps?
This discussion also suggests the question: Can you actually, in real life, use the Allais Paradox to money-pump humans? If you can, then the behavior of humans does not provide evidence of the rationality of their choices in this scenario, regardless of any theoretical arguments about how we could avoid money pumps while keeping this preference. My brief Google search failed to immediately turn up any experiments involving actual money pumps, but I haven’t done a careful literature review.
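For concreteness, here is my sketch of the standard theoretical construction such an experiment would have to instantiate (not a description of any actual study). It assumes the usual Allais payoffs and that the agent’s choice at the live node mirrors its 1A-over-1B choice:

```python
# Dynamic money pump against the Allais pattern (theoretical sketch).
# The bookie charges a small fee per swap the agent accepts.
import random

FEE = 1.0

def pump_once(consequentialist, rng):
    """Agent starts holding 2A; returns total fees the bookie extracts."""
    # Allais pattern: 2B is preferred to 2A, so the agent pays FEE to swap.
    fees = FEE
    # Resolve the common 89% branch, on which 2A and 2B both pay $0.
    if rng.random() < 0.89:
        return fees
    # Live branch (11%): 2B has become (10/11 of $5M, 1/11 of $0), while 2A
    # would now be $1M for sure. The certainty-favoring preference behind
    # choosing 1A over 1B prefers the sure $1M here, so a consequentialist
    # re-evaluates and pays FEE again to swap back.
    if consequentialist:
        fees += FEE  # ends holding what 2A would have paid, minus two fees
    # A resolute agent sticks to the plan it preferred ex ante (2B).
    return fees

rng = random.Random(0)
for kind, label in ((True, "consequentialist"), (False, "resolute")):
    avg = sum(pump_once(kind, rng) for _ in range(100_000)) / 100_000
    print(f"{label}: average fees extracted = {avg:.3f}")
# The consequentialist predictably ends up with 2A's payoff minus ~1.11
# fees -- strictly worse than never trading at all. The resolute agent
# pays one fee but walks away holding the lottery it actually preferred.
```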
Sidenote: Can the Allais Paradox result be justified in other ways?
There are two defenses of this result that I somewhat credit:
A.
Eliminating a possible outcome makes it cognitively cheaper to plan for what happens after the lottery, because you don’t need to consider as many distinct cases.
Notice this reasoning only applies if there is an “after”, which is usually true in real life but usually false in abstract formal examples.
B.
Suppose you are living among a population of similar agents that compete for resources, and all of the other agents get to make a similar choice between lotteries. Then the outcome where you get nothing is always the same in terms of absolute resources, but not in terms of relative resources compared to other people.
If you choose between a 1% chance and a 0% chance of getting nothing, then the few agents who end up with nothing will be out-competed by almost everyone around them. They will lose approximately all competitions and will be the obvious choice for predators to target.
If you choose between a 90% chance and an 89% chance of getting nothing, then agents who win millions will still out-compete the ones who get nothing, but they’ll have a harder time monopolizing all opportunities because there won’t be as many winners. Many of the “losers” will still have a decent relative standing (the toy simulation below illustrates this gap).
This reasoning doesn’t apply if you somehow know this lottery is a special one-time opportunity for you only, but it seems plausible that our instincts evolved mostly to deal with non-unique opportunities.
However, notice that these two reasons justify different things. Reason A justifies zero-risk bias, i.e. paying a premium to reduce a risk to ~zero; it implies a sharp change in your preferences at a specific probability. Contrariwise, reason B would remain nearly as strong if we changed “1% or 0%” to “2% or 1%”.
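Here’s the promised toy check of Reason B (my own sketch, assuming everyone in a large population independently draws from the same lottery, with the standard Allais payoffs): how much of the population strictly out-earns an agent who ends up with nothing?

```python
# Relative standing of a $0 agent under the two risky lotteries.
import random

M = 1_000_000
rng = random.Random(0)

def draw(lottery):
    """Sample one payoff from a list of (probability, payoff) pairs."""
    r, acc = rng.random(), 0.0
    for p, x in lottery:
        acc += p
        if r < acc:
            return x
    return lottery[-1][1]

scenarios = {
    "everyone plays 1B (1% get $0)":  [(0.89, M), (0.10, 5 * M), (0.01, 0)],
    "everyone plays 2B (90% get $0)": [(0.10, 5 * M), (0.90, 0)],
}
for name, lottery in scenarios.items():
    wealth = [draw(lottery) for _ in range(100_000)]
    richer = sum(w > 0 for w in wealth) / len(wealth)
    print(f"{name}: a $0 agent is out-earned by ~{richer:.0%} of the population")
# ~99% vs ~10%: ending up with nothing is far worse in relative terms in
# the first scenario, matching the intuition behind Reason B.
```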
3.
In the section on the Ellsberg Paradox, I think you make some clever points about why the standard human answer might be rational, but I don’t see how any part of this section ties into logical consequentialism or the violation thereof. For example, you have not explained how a money pump could be constructed based on this scenario.
4.
The argument in favor of logical consequentialism is obvious: If you violate it, you are leaving money on the table. (Violating it implies that you are making a choice that satisfies your own preferences less than another choice you could have made in the current circumstances.)
In fact, this is essentially the same reason that we think that vulnerability to money pumps is bad (you end up with less money than you predictably could have). So it seems pretty weird to argue that we need to keep all the axioms that prevent money pumps but it’s somehow ok to drop consequentialism. I’m not sure what set of assumptions would validly lead to that combination of conclusions.