The central point of the first half or so of this post is a good one: since E(X) = P(X)U(X), you can choose different P and U that give the same E, so bets can be decoupled from probabilities.
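To make the decoupling concrete, here is a worked toy case of my own (the numbers are illustrative, not from the post). Take a per-awakening bet that wins x on Heads and loses y on Tails, with two Tails awakenings. A Thirder can price it with P(Heads) = 1/3 and equal utility per awakening; a Halfer can price it with P(Heads) = 1/2 and a Tails utility weighted by the two awakenings:

$$E_{\text{thirder}} = \tfrac{1}{3}x - \tfrac{2}{3}y, \qquad E_{\text{halfer}} = \tfrac{1}{2}x - \tfrac{1}{2}(2y) = \tfrac{3}{2}\,E_{\text{thirder}}$$

The two expected values differ only by a positive factor, so they endorse exactly the same bets; the awakening count sits in P on one split and in U on the other.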
I would put it this way: choices and consequences are in the territory*; probabilities and utilities are in the map.
Now, it could be that some probability/utility breakdowns are more sensible than others on practical or aesthetic criteria, and in the next part of this post (“Utility Instability under Thirdism”) you make an argument against Thirdism based on one such criterion.
However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct. If Sleeping Beauty is asked before the coin toss to bet under the same reward structure as after the toss, she will bet the same way in both cases. That is, Thirder Sleeping Beauty will bet at Thirder odds even before the experiment starts, provided the coin being bet on is specifically the one used in this experiment and the reward structure rewards her equally (as assessed by her utility function) for correctness in each awakening.
Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that is a matter of your own particular taste.
Then, the “Technicolor Sleeping Beauty” part seems to assume a reward structure where it only matters whether you bet in a particular universe, not how many times you bet. This is a very “Halfer” assumption about the reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure and follow the same strategy.
Finally, on Rare Event Sleeping Beauty, it seems to me that you are biting the bullet here, to some extent, in arguing that this is not a reason to favour Thirdism.
I think we are fully justified in discarding Thirdism altogether and simply moving on, as we have resolved all the actual disagreements.
Uh... no. But I do look forward to your next post anyway.
*edit: to be more correct, they’re less far up the map stack than probabilities and utilities. I’m making this clarification just in case someone might think from that statement that I believe in free will (I don’t).
Yeah, that was sloppy language, though I do like to think more in terms of bets than you do. One of my ways of thinking about these sorts of issues is in terms of “fair bets”: each person thinks a bet with payoffs that align with their assumptions about utility is “fair”, and a bet with payoffs that align with different assumptions about utility is “unfair”.

Edit: to be clear, a “fair” bet for a person is one where the payoffs are such that the betting odds at which they break even match the probabilities that person would assign.

OK, I was also being sloppy in the parts you are responding to.
Scenario 1: a bet about a coin toss, with nothing depending on the outcome (so payoff equal per coin-toss outcome). Fair odds: 1:1.

Scenario 2: a bet about a Sleeping Beauty coin toss, with payoff equal per awakening. Fair odds: 2:1.

Scenario 3: a bet about a Sleeping Beauty coin toss, with payoff equal per coin-toss outcome. Fair odds: 1:1.
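A quick Monte Carlo sketch of my own that checks these break-even odds (the settlement bookkeeping is my assumption about what “payoff equal per awakening/outcome” means, not anything from the post):

```python
import random

def expected_profit(odds, per_awakening, trials=200_000):
    """Average profit from staking 1 unit on Heads at the given odds.

    Win `odds` units if Heads, lose the stake if Tails.
    If `per_awakening` is True, the bet settles once per awakening
    (1 awakening on Heads, 2 on Tails); otherwise once per coin toss.
    """
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        settlements = 1 if (heads or not per_awakening) else 2
        total += settlements * (odds if heads else -1.0)
    return total / trials

# Scenarios 1 and 3: payoff equal per coin-toss outcome, fair at 1:1
print(expected_profit(odds=1.0, per_awakening=False))  # ~0.0
# Scenario 2: payoff equal per awakening, fair at 2:1
print(expected_profit(odds=2.0, per_awakening=True))   # ~0.0
```

Note that Scenarios 1 and 3 collapse to the same bookkeeping here, which is the point: the fair odds are fixed by the payoff structure alone.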
It doesn’t matter if it’s agreed to before or after the experiment, as long as the payoffs work out that way. Betting within the experiment is one way for the payoffs to more naturally line up on a per-awakening basis, but it’s only relevant (to bet choices) to the extent that it affects the payoffs.
Now, the conventional Thirder position (as I understand it) consistently applies equal utilities per awakening when considered from a position within the experiment.
I don’t actually know what the Thirder position is supposed to be from a standpoint before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.
As I see it, Thirders will only regret a bet (in the sense of considering it a bad choice to enter into ex ante given their current utilities) if you do some kind of bait and switch where you don’t make it clear what the payoffs were going to be up front.
Speculation; have you actually asked Thirders and Halfers to solve the problem while making the reward structure clear? Note that if you don’t make the reward structure clear, Thirders are more likely to misunderstand the question when, as in this case, the reward structure is “fair” from the Halfer perspective and “unfair” from the Thirder perspective.
A Halfer has to discount their utility based on how many of them there are; a Thirder doesn’t. Contrary to your perspective, it seems to me that Thirder utility is the more stable one.
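To spell out the discounting I mean, here is my own generalization (assuming N Tails awakenings rather than two, with a per-awakening bet that wins x on Heads and loses y on Tails):

$$E_{\text{halfer}} = \tfrac{1}{2}x - \tfrac{1}{2}Ny, \qquad E_{\text{thirder}} = \tfrac{1}{N+1}x - \tfrac{N}{N+1}y = \tfrac{2}{N+1}\,E_{\text{halfer}}$$

Both break even at x = Ny, but the Halfer’s utility weight moves with N while the Thirder’s utilities stay flat; that is the sense in which the Thirder split looks more stable to me.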
… and in my hasty reading and response I misread the conditions of the experiment (it’s a “Halfer” reward structure again). (As I’ve mentioned before in a comment on another of your posts, I think Sleeping Beauty is unusually ambiguous, so both Halfer and Thirder perspectives are viable. But I lean toward the general Thirder perspective on other problems (e.g. SIA seems much more sensible to me (edit: in most situations) than SSA), so Thirdism seems more intuitive to me.)
Thirders can adapt to different reward structures but need to actually notice what the reward structure is!
the things mentioned in this comment chain. Which actually doesn’t feel like all that much; it feels like there are maybe one or two differences in philosophical assumptions creating this disagreement (though maybe we aren’t getting at the key assumptions).
Edited to add: The criterion I mainly use to evaluate probability/utility splits is the typical reward structure: you should assign probabilities and utilities such that a typical reward structure seems “fair”, so that you don’t wind up having to adjust for different utilities when the rewards have the typical structure (you do have to adjust if the reward structure is atypical and thus seems “unfair”).
This results in me agreeing with SIA in a lot of cases. An example of an exception is Boltzmann brains. A typical reward structure would give no reward for correctly believing that you are a Boltzmann brain, so you should always bet in realistic bets as if you aren’t one, and for this to be “fair” I set P = 0 instead of SIA’s U = 0. I find people believing silly things about Boltzmann brains, like taking a theory’s prediction of a large number of Boltzmann brains to be evidence against that theory, and I think wider acceptance of setting P = 0 instead of U = 0 here would cut that nonsense off. To be clear, normal SIA does handle this case fine (a theory predicting Boltzmann brains is not evidence against it), but setting P = 0 would make that more obvious to people’s intuitions.
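In expected-value terms the two conventions agree on every realistic bet; a toy formulation of my own:

$$E = \underbrace{P(\text{BB})\,U_{\text{BB}}}_{=\,0\text{ under either convention}} + P(\lnot\text{BB})\,U_{\lnot\text{BB}}$$

Whether you zero out the first term with SIA’s U = 0 or with my P = 0, every bet gets evaluated as if you are not a Boltzmann brain; the difference is only in which factor carries the zero.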
Sleeping Beauty, in contrast, is a highly artificial situation that has been pared of context to the point where it’s unclear what a typical reward structure would be, which is why I consider the problem ambiguous.