I can see how a computer could simulate any anthropic reasoner’s thought process. But if you ran the Sleeping Beauty problem as a computer simulation (i.e. implemented the illusionist paradigm), aren’t the Halfers going to win on average?
Imagine the problem as a genetic algorithm with one parameter, the credence. Wouldn’t the whole population converge to 0.5?
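One way to make that concrete is a toy genetic algorithm, sketched below under an assumption the question leaves open: what counts as fitness. Here fitness is a (negative) Brier score on the credence in Heads, and the score is counted either once per branch or once per awakening. In this sketch the population mean settles near 0.5 under per-branch scoring but near 1⁄3 under per-awakening scoring, so the convergence point depends on the scoring convention rather than on the algorithm itself.

```python
import random

def loss(p, heads, per_awakening):
    # Brier loss of credence p (probability of Heads) for one experiment.
    # Tails means two awakenings, so the same guess may be scored twice.
    outcome = 1.0 if heads else 0.0
    times_scored = 2 if (not heads and per_awakening) else 1
    return times_scored * (p - outcome) ** 2

def evolve(per_awakening, pop_size=100, generations=150, flips=200, seed=1):
    """Evolve a population of credences; return the (smoothed) final mean."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    means = []
    for _ in range(generations):
        batch = [rng.random() < 0.5 for _ in range(flips)]
        # Rank agents by total loss on a shared batch of coin flips.
        pop.sort(key=lambda p: sum(loss(p, h, per_awakening) for h in batch))
        # Truncation selection: the best half survives, refilled with mutants.
        survivors = pop[: pop_size // 2]
        mutants = [min(1.0, max(0.0, p + rng.gauss(0.0, 0.02)))
                   for p in survivors]
        pop = survivors + mutants
        means.append(sum(pop) / pop_size)
    # Average over the last third of generations to smooth sampling noise.
    tail = means[-(generations // 3):]
    return sum(tail) / len(tail)
```

Running `evolve(False)` (score once per branch) gives roughly 0.5, while `evolve(True)` (score once per awakening) gives roughly 1⁄3.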
I think the solution to the Sleeping Beauty problem depends on how exactly the bets are evaluated. The entire idea is that in one branch you make a bet once, but in the other branch you make the bet twice. Does that mean that if you guess correctly in the latter branch, you win twice as much money? Or, despite making the (same) bet twice, do you only get the money once?
Depending on the answer, the optimal bet probability is either 1⁄2 or 1⁄3.
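A minimal Monte Carlo sketch of the two evaluation rules (my own framing, not anything specified in the problem): at each awakening, Beauty can buy a hypothetical ticket that pays $1 if the coin landed Heads. If the two Tails-branch tickets are both settled, she breaks even at a price of 1⁄3; if the duplicated bet is settled only once, she breaks even at 1⁄2.

```python
import random

def expected_profit(price, duplicate_payout, trials=200_000, seed=0):
    """Average profit per experiment if Beauty always buys, at each
    awakening, a $1-if-Heads ticket costing `price`.

    duplicate_payout: if True, the Tails branch settles both tickets;
    if False, the repeated (identical) bet is settled only once."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < 0.5:
            # Heads: one awakening, one winning ticket.
            total += 1.0 - price
        else:
            # Tails: two awakenings, losing ticket(s).
            total -= (2 if duplicate_payout else 1) * price
    return total / trials
```

The break-even price is exactly the "optimal bet probability": with duplicated payouts, `expected_profit(1/3, True)` is approximately zero, and without them, `expected_profit(0.5, False)` is approximately zero.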
A computer with no first-person experience can still do anthropic reasoning. The two don’t really interact with each other.
You’re right. I’m updating towards illusionism being orthogonal to anthropics in terms of betting behavior, though the upshot is still obscure to me.