I don’t believe any of the alternative solutions to “Pascal’s Mugging” are compelling for all possible constructions of “Pascal’s Mugging.” The only one that seems difficult to get around by modifying the construction is the “bounded utility function” solution, but I don’t believe a bounded utility function is reasonable: I believe, for example, that one should be willing to pay $100 for a 1/N chance of saving N lives for any N>=1, provided (as is not the case with “Pascal’s Mugging”) the “1/N chance of saving N lives” calculation is well supported and therefore robust (i.e., has relatively narrow error bars). Thus, “Pascal’s Mugging” remains an example of the sort of “absurd implication” I’d expect from an insufficiently skeptical prior.
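The arithmetic behind the “pay $100 for a 1/N chance of saving N lives” claim is that the expected number of lives saved is constant in N. A minimal sketch (the specific values of N are illustrative):

```python
from fractions import Fraction

# Expected lives saved when paying $100 for a 1/N chance of saving
# N lives: N * (1/N) = 1 for every N >= 1, so the expected benefit
# never shrinks as N grows -- which is why only a bounded utility
# function lets you refuse the bet for large N. Exact rational
# arithmetic avoids floating-point rounding at huge N.
for N in [1, 10, 10**6, 10**100]:
    expected_lives = N * Fraction(1, N)
    print(N, expected_lives)  # expected_lives is exactly 1 each time
```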
Yes. I would definitely pay significant money to stop e.g. nuclear war conditional on twelve 6-sided dice all rolling 1. (In the case of dice, pretty much any natural choice of prior over the initial state of the dice before they bounce yields a probability very close to 1⁄6 for each side.)
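For concreteness, the probability of that conditioning event, assuming twelve independent fair dice, is (1/6)^12, roughly 4.6e-10. A sketch of the expected-value calculation (the dollar valuation V is a purely hypothetical stand-in):

```python
# Probability that twelve independent fair 6-sided dice all show 1.
p_all_ones = (1 / 6) ** 12

# Hypothetical dollar value of averting the bad outcome; any V large
# enough makes the conditional payment worthwhile in expectation.
V = 1e15
expected_benefit = p_all_ones * V

print(f"P(all twelve dice show 1) = {p_all_ones:.3e}")
print(f"Expected benefit of the conditional intervention: ${expected_benefit:,.0f}")
```

The point is that the bet is accepted because the probability, while tiny, is robustly estimated, not because the payoff has been inflated to overwhelm any prior.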
Formally, a number postulated in an argument can grow faster than any computable function of the argument’s length, provided the “argument” language is at least Turing complete (i.e., can specify a Turing machine together with its tape). Consequently, if you base priors on length alone, the expected-utility sum is not even well defined: its value, and even its sign, depend on the order of summation, and so on.
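The order-of-summation problem is the same phenomenon seen in conditionally convergent series, where rearranging the terms changes the limit. A toy illustration (not the actual prior-weighted utility sum) using the alternating harmonic series:

```python
import math

# Alternating harmonic series: 1 - 1/2 + 1/3 - 1/4 + ... = ln(2)
# in its natural order.
N = 10**6
natural_order = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# Rearranged: two positive terms for every negative term.
# This ordering instead converges to (3/2) * ln(2).
pos = iter(range(1, 10**7, 2))  # odd denominators: positive terms 1/k
neg = iter(range(2, 10**7, 2))  # even denominators: negative terms -1/k
rearranged = 0.0
for _ in range(N // 3):
    rearranged += 1 / next(pos) + 1 / next(pos) - 1 / next(neg)

print(natural_order, math.log(2))        # both approximately 0.6931
print(rearranged, 1.5 * math.log(2))     # both approximately 1.0397
```

If the terms of the sum are not absolutely convergent, as happens when postulated utilities outgrow any length-based discount, there is simply no canonical answer for the expectation.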
If we sum in order of increasing length, everything is dominated by theories that dedicate the largest part of their length to making up a really huge number (since even a very small increase in that part dramatically boosts the number). It might even be possible for a superintelligence, or even a human-level intelligence, to extract an actionable conclusion from this: something like destroying low-temperature labs, because the simplest theory linking a very large number to actions does so by slightly modifying the laws of physics so that very cold liquid helium triggers some sort of world destruction or multiverse destruction, killing people who presumably don’t want to die. Or, conversely, liquid helium maximization, because it stabilizes some multiverse full of people who’d rather live than die (I’d expect the former to dominate, since unusual experiments triggering some sort of instability seem like something that can be postulated more succinctly). Or maximization of the number of anti-protons. Something similarly silly, where the “appeal” lies in how much of the theory’s length is left over for making the consequences huge. Either way, starting from some good intention (saving people from involuntary death, CEV, or whatever), a prior that discounts theories only for their length doesn’t get you anything particularly nice in the end: it assigns arbitrarily low probability (with limit 0) to anything super good.