Actually, now that I think about it, I might've just nailed it, and also nailed the problem with using probabilistic reasoning in practice. You can easily pick some random hypothesis out of an enormously huge space, which gives it a very small prior, but then forget about this enormous space.
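As a minimal sketch of the point (all numbers here are illustrative assumptions, not from the comment): under a uniform prior over a large hypothesis space, any single hypothesis you happen to pick starts with prior 1/N, and even strong evidence in its favor leaves the posterior tiny if you remember the size of the space.

```python
# Illustrative sketch: forgetting the hypothesis space inflates a hypothesis.
from fractions import Fraction

num_hypotheses = 10**6               # assumed size of the hypothesis space
prior = Fraction(1, num_hypotheses)  # uniform prior: each hypothesis gets 1/N

# Suppose the evidence favors the picked hypothesis 1000:1 over a random one.
likelihood_ratio = 1000
posterior_odds = (prior / (1 - prior)) * likelihood_ratio

# The posterior odds are still about 1:1000 against -- the huge space wins.
print(float(posterior_odds))
```

The point of the sketch is only that the factor of 1/N does the real work; if you "privilege the hypothesis" by dropping it, the same evidence looks decisive.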
You might like to read this post, “Privileging the hypothesis.”
2: I don’t see how it’s overly specific. If we consider (a coin or a person), one randomly chosen (coin or person) affecting 3^^^^3 (coins or people) is unlikely. Still, the explanation is indeed somewhat problematic.
It assumes a particular account of anthropic reasoning with infinite certainty. If you get your anthropic hypotheses out of something like Solomonoff induction (the programs best approximating our sense inputs can be thought of as a combination of a simulation of our world plus a bit of code that acts as an “anthropic theory” and reads out part of the simulation as our sense inputs), then things like SIA, SSA, and “you’re more likely to be a given person if they have more causal influence” are not radically different in complexity. So you get Pascal’s mugging from the combination of 1) laws of physics allowing vast quantities of computation and 2) some kind of anthropic theory that makes it unlikely you are one of the mass of simulations.
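The arithmetic behind the mugging can be sketched directly. 3^^^^3 is far too large to represent, so the code below uses a stand-in huge payoff (10**100), and the probability assigned to the mugger's claim is an assumed illustrative value; the structure of the calculation, not the numbers, is the point.

```python
# Hedged sketch: a tiny probability times a vast payoff dominates
# expected utility. 10**100 stands in for 3^^^^3, which is uncomputable
# in practice; p_mugger is an assumed, illustrative probability.
p_mugger = 10**-30      # assumed probability the mugger's claim is true
huge_payoff = 10**100   # stand-in for 3^^^^3 units of utility
cost_of_paying = 5      # assumed mundane cost of handing over the money

expected_gain = p_mugger * huge_payoff
# Even at probability 10^-30, the expected gain swamps the cost.
print(expected_gain > cost_of_paying)
```

This is why the combination named above is enough: point 1 keeps the payoff term unbounded, and point 2 keeps the probability from shrinking fast enough to cancel it.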
Hypothesis: Yeah. The problem is that, apart from trivial cases involving clearly made-up nonsense, it is very difficult to track how much the hypothesis got ‘cherry-picked’, since the process of choosing a hypothesis, when not entirely insane, should increase the probability of its being true relative to the hypotheses that process did not pick.
Anthropic reasoning: I agree it’s kind of flimsy. On second thought, I don’t like this argument too much.