If, according to a mathematical model, the unconditional probability of a coin being Heads equals 1/3, it requires quite some suspension of disbelief to claim that this model accounts for the coin being fair. And if justifying that a model describes the problem requires suspension of disbelief, that's a pretty good hint that it actually doesn't.
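To make the tension concrete, here is a minimal simulation sketch, assuming the standard Sleeping Beauty protocol (a fair coin; Heads means one awakening, Tails means two). The function name and parameters are illustrative, not from the post. It shows where the 1/3 figure comes from: the coin itself lands Heads in about half of the experiments, and it is only the per-awakening count that yields 1/3.

```python
import random

def simulate(trials: int = 100_000) -> None:
    """Sketch of the standard Sleeping Beauty setup (assumption: fair coin;
    Heads -> one awakening, Tails -> two awakenings)."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 1 if coin == "Heads" else 2
        if coin == "Heads":
            heads_experiments += 1
            heads_awakenings += awakenings
        total_awakenings += awakenings
    # Per experiment, the fair coin is Heads about half the time.
    print(f"P(Heads) per experiment: {heads_experiments / trials:.3f}")
    # Per awakening, Heads-awakenings are about a third of all awakenings,
    # because Tails produces twice as many awakenings.
    print(f"Fraction of awakenings with Heads: {heads_awakenings / total_awakenings:.3f}")

if __name__ == "__main__":
    simulate()
```

The 1/3 appears only when awakenings, rather than coin tosses, are treated as the sample space; a model that assigns 1/3 as the unconditional probability of Heads is therefore in tension with the coin being fair.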
One of the points of this post is to provide an opportunity to look at situations where a model clearly describes a problem: Elga's Model and the No-Coin-Toss problem, the Lewis Model and the Single-Awakening Problem, the Updating Model and the Observer problem; and then to compare them with the awkward attempts to stretch these models to fit the setting of Sleeping Beauty, which require constantly averting your eyes from all kinds of weirdness and inconsistency.
Math doesn't have a formal way to prove that a model fits a problem, so in theory people can still cling to these models regardless. But I hope we know better than that.