I have mixed feelings about this article. On the one hand, its main point is that causal decision theory hasn’t been reconciled with quantum mechanics yet. That’s hardly new. It does strengthen the case that ignoring quantum effects in a decision theory is a bad idea (in terms of getting Dutch-booked). To a causalist, quantum effects are essentially black swans, after all, and black swans are bad.
On the other hand, they do raise an interesting question: roughly speaking, what kinds of black swans should an agent be “comfortable” ignoring? The example they consider is having a cup of tea while knowing there’s a nonzero probability that doing so will destroy the universe. Their counterarguments are not terribly strong: they allege (1) a “pre-judgement” process in which we determine that no further hypotheses will affect the decision substantially, and (2) a mostly specious argument by symmetry, namely that the probability that not drinking the cup of tea will destroy the universe is comparable to the probability that drinking it will.
Concerning the first claim, even if there is, for practical reasons, a pre-judgement process, it doesn’t (at least in humans) operate in this manner. I’ve seen this pre-judgement process alluded to in decision theory papers before, but I don’t think it’s appreciated how horribly uncomputable such a process would have to be in order to work as described. At the end of the day, black swans are still a problem, and some proportion of them are existential risks.
Concerning the second claim, there are no grounds for assuming such symmetry. For all we know, the first event’s probability could be 10^-32 and the second’s 10^-10, or vice versa. A lack of knowledge about those probabilities doesn’t imply that the two are comparable.
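To put toy numbers on this (purely illustrative; every quantity below is made up): write p_1 for the probability that drinking the tea destroys the universe, p_2 for the probability that abstaining does, U_D for the astronomically negative utility of destruction, and u for the small positive utility of the tea itself. Then

$$
\begin{aligned}
EU(\text{drink}) &= p_1 U_D + (1 - p_1)\,u,\\
EU(\text{abstain}) &= p_2 U_D,\\
EU(\text{drink}) - EU(\text{abstain}) &= (p_1 - p_2)\,U_D + (1 - p_1)\,u.
\end{aligned}
$$

The doom terms cancel only if p_1 = p_2 exactly. If instead p_1 = 10^-10 and p_2 = 10^-32, the difference is dominated by the doom term as soon as |U_D| exceeds roughly 10^10 · |u|, and the “negligible” hypothesis flips the decision.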
It’s an interesting paradox: how do you reduce, avoid, or insure against something you can’t quantify?
Concerning the second claim, there are no grounds for assuming such symmetry. For all we know, the first event’s probability could be 10^-32 and the second’s 10^-10, or vice versa. A lack of knowledge about those probabilities doesn’t imply that the two are comparable.
But if we don’t know which one’s which, aren’t our subjective probabilities of each destroying the world equal anyway?
I may have misread the original section:

It can be argued against this conclusion that one usually assumes that we are allowed to ignore extremely unlikely hypotheses in our decisions. Consider, say, the hypothesis that having a cup of tea would result in the destruction of the universe. Surely, the argument goes, we don’t need to consider all logically possible hypotheses? My response to this criticism is that we don’t consider all possible hypotheses because we make a pre-judgement that no further hypotheses would change our decisions and that further considerations would only introduce unnecessary complications in the calculations. Most tea drinkers attribute an exceedingly small probability to the destruction of the universe conditional on their drinking tea. But if a tea drinker were to give any appreciable probability to this hypothesis, it would certainly be irrational for them to have that cup of tea.
Further, in a situation like the referee’s example, not only would these kinds of unlikely hypotheses have negligible effects on the decisions, but there would also usually be equally arbitrary competing hypotheses pulling the decision the other way: the hypothesis that NOT having a given cup of tea will lead to the destruction of the universe is just as (un)likely as the hypothesis that having that cup of tea will do so, and it precisely cancels the effect of the first.
It does not sound as if the author is assuming an uninformative prior with respect to the universe-destroying capabilities of tea, but that would explain the symmetry argument.
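If the author were assuming one, here is a sketch of how the symmetry argument would go through: suppose the agent’s prior over the pair (p_1, p_2), with p_1 and p_2 as in the toy calculation above, is exchangeable, i.e., it assigns the same weight to (a, b) as to (b, a). Then

$$
\mathbb{E}[p_1] = \mathbb{E}[p_2],
$$

so the subjective probability of doom is the same for either act, and the doom terms cancel in expectation. That cancellation is bought entirely by the assumed prior symmetry, though; any evidence distinguishing the two hypotheses breaks it.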