As reductios of anthropic views go, these are all pretty mild. Abandoning conservation of expected evidence isn't exactly an un-biteable bullet. And "violating causality" is especially mild for those of us who like non-causal decision theories. As a one-boxer I've been accused of believing in retrocausality dozens of times… sticks and stones, you know. This sort of "causality violation" seems similarly frivolous.

As for the arbitrariness of SSA's reference class: that can be avoided by steelmanning SSA into a more elegant form. Drop the reference-class idea and work with centered worlds instead. SSA is what you get if you do ordinary Bayesian conditionalization on centered worlds rather than on possible worlds. (That is arguably the more elegant and natural way of doing it, since possible worlds are a weird restriction on the sorts of sentences we use; centered worlds, by contrast, are simply maximally consistent sets of sentences, full stop.)

As for changing the probability of past events: this isn't mysterious in principle. We change the probability of past events all the time, since probabilities are just our credences in things. More seriously, let A be the hypothetical state of the past light-cone that would result in your choosing to stretch your arm ten minutes from now, and B the hypothetical state that would result in your choosing not to. A and B are past events, but you should be uncertain which one obtained until about ten minutes from now, at which point (depending on what you choose!) your credence in A will go up or down.
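The arm-stretching point can be sketched as an ordinary Bayesian update. This is just a toy model with made-up numbers (the 50/50 prior and the deterministic link between past state and choice are my assumptions, not anything deep):

```python
# Toy model: A and B are mutually exclusive past states of the light-cone.
# A -> you stretch your arm ten minutes from now; B -> you don't.
prior = {"A": 0.5, "B": 0.5}  # credence before you observe your own choice

# Likelihood of observing "stretch" given each past state (deterministic here).
likelihood = {"A": 1.0, "B": 0.0}

observation = "stretch"  # suppose you find yourself stretching
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}
print(posterior)  # credence in the past event A jumps from 0.5 to 1.0
```

Nothing retrocausal happens here: the past state is fixed; only your credence about it moves when the evidence (your own choice) comes in.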
There are strong reductios in the vicinity though, if I recall correctly. (I did my MA on this stuff, but it was a while ago so I’m a little rusty.)
FNC-type views have the result that (a) no matter what we experience, we almost instantly become convinced that the universe is an infinite soup of random noise occasionally coalescing to form Boltzmann Brains, because that is the simplest hypothesis that assigns probability 1 to our exact evidence occurring somewhere; and (b) we stay in that state forever and act accordingly—which means thinking happy thoughts, or something like that, whether we are average utilitarians or total utilitarians or egoists.
SIA-type views are, as far as I can tell, incoherent, in the following sense: the population size of possible universes grows much faster than their prior probability can shrink, so if you want to say that their probability is proportional to their population size… how? The resulting weights can't be normalized. (Flag: I notice I am confused about this part.) A more down-to-earth way of putting the problem: the hypothesis that there is one universe is dominated by the hypothesis that there are 3^^^^3 copies of it in parallel dimensions, which in turn is dominated by the hypothesis that there are 4^^^^^4, and so on without end.
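A minimal numerical sketch of the normalization problem. The 2^-k complexity prior and the tower-of-exponentials population are stand-in assumptions of mine, much tamer than 3^^^^3, and the divergence already shows up:

```python
import math

def log2_prior(k):
    # Toy complexity prior: 2^-k, so log2(prior) = -k.
    return -k

def population(k):
    # Tower of exponentials: pop(1)=2, pop(2)=4, pop(3)=16, pop(4)=65536, ...
    p = 1
    for _ in range(k):
        p = 2 ** p
    return p

# SIA-style weight (in log2) = log2(population) + log2(prior).
# The population term swamps the prior term, so the weights blow up:
# there is no normalization constant that turns them into probabilities.
log_weights = [math.log2(population(k)) + log2_prior(k) for k in range(1, 6)]
print(log_weights)  # [0.0, 0.0, 1.0, 12.0, 65531.0]
```

Any prior that shrinks merely exponentially (or polynomially) in description length gets outrun this way; that is the "grows much faster than their probability can shrink" point in numbers.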
SSA-type views are the only game in town, as far as I’m concerned—except for the “Let’s abandon probability entirely and just do decision theory” idea you favor. I’m not sure what to make of it yet. Anyhow, the big problem I see for SSA-type views is the one you mention about using the ability to create tons of copies of yourself to influence the world. That seems weird all right. I’d like to avoid that consequence if possible. But it doesn’t seem worse than weird to me yet. It doesn’t seem… un-biteable.
EDIT: I should add that I think your conclusion is probably right—your move away from probability and towards decision theory seems very promising. As we went updateless in decision theory, so too should we go updateless in probability. Something like that (I have to think & read about it more). I'm just objecting to the strong wording in your arguments to get there. :)