Ok, let me see if I’m understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don’t know which, so you can’t make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.
Basically yes. Strictly speaking it’s not just any arbitrary digit, but any digit about whose value you know exactly as much as you know about the value of X.
For any digit you can execute this algorithm:
Check whether you know about it more (or less) than you know about X.
Yes: Go to the next digit
No: Add it to the probability experiment
As a result you get a collection of digits about whose values you knew exactly as much as you know about X, and so you can use them to estimate your credence for X.
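As an illustration only, here is a minimal sketch of that idea in Python. The details are my own assumptions, not anything from the thread: I compute digits of pi with Machin's formula, pick position X = 999 as the digit we pretend not to know, treat a range of other digits we also haven't "looked at" as the probability experiment, and use their even/odd frequency as the credence that the X-th digit is even.

```python
def arccot(x, unity):
    """arctan(1/x) * unity, via the Taylor series in pure integer arithmetic."""
    total = xpow = unity // x
    n, sign = 3, -1
    while xpow > 0:
        xpow //= x * x
        total += sign * (xpow // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(ndigits):
    """First ndigits decimal digits of pi as a string '3141...',
    using Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    unity = 10 ** (ndigits + 10)  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return str(pi)[:ndigits]

digits = pi_digits(1000)
x = 999  # the "X-th digit" whose value we pretend not to know

# The probability experiment: digits we have equally not looked at.
# (The cutoff at position 100 is an arbitrary illustrative choice.)
sample = [int(d) for i, d in enumerate(digits) if i != x and i >= 100]
credence_even = sum(d % 2 == 0 for d in sample) / len(sample)
```

Since the unknown digits are (as far as our knowledge goes) distributed evenly over {0...9}, `credence_even` comes out close to 1/2, which is exactly the credence the generalized experiment assigns to "the X-th digit is even".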
The first part about not having a coherent model sounds a lot like the frequentist idea that you can’t generate a coherent probability for a coin of unknown bias—you know that it’s not 1⁄2 but you can’t decide on any specific value.
Yes. As I say in the post:
By the same logic, tossing a coin is also deterministic, because if we toss the same coin exactly the same way in exactly the same conditions, the outcome is always the same. But that’s not how we reason about it. Just like we’ve generalized the “coin tossing” probability experiment from multiple individual coin tosses, we can generalize the “checking whether some previously unknown digit of pi is even or odd” probability experiment from multiple individual checks of different unknown digits of pi.
The way a lot of Bayesians mock Frequentists for not being able to conceptualize the probability of a coin of unknown fairness, and then make the exact same mistake by failing to conceptualize the probability of a specific digit of pi whose value is unknown, has always struck me as quite ironic.
This seems equivalent to my definition of “information that would change your answer if it was different”, so it looks like we converged on similar ideas?
I think we did!
I’d argue that it’s physical uncertainty before the coin is flipped, but logical certainty after. After the flip, the coin’s state is unknown the same way the X-th digit of pi is unknown—the answer exists and all you need to do is look for it.
That’s not how people usually use these terms. The uncertainty about the state of the coin after the toss is describable within the framework of possible worlds, just like uncertainty about a future coin toss, but uncertainty about a digit of pi isn’t.
Moreover, isn’t it the same before the flip? It’s not that a coin toss is “objectively random”. At the very least, the answer also exists in the future, and all you need to do is wait a bit for it to be revealed.
The core principle is the same: there is in fact some value that the Probability Experiment function takes in this iteration, but you don’t know which. You can take some action to learn the answer: look under the box, do some computation, or just wait a couple of seconds. But you can also reason about the state of your current uncertainty before these actions are taken.
That’s not how people usually use these terms. The uncertainty about the state of the coin after the toss is describable within the framework of possible worlds, just like uncertainty about a future coin toss, but uncertainty about a digit of pi isn’t.
Oops, that’s my bad for not double-checking the definitions before I wrote that comment. I think the distinction I was getting at was more like known unknowns vs unknown unknowns, which isn’t relevant in platonic-ideal probability experiments like the ones we’re discussing here, but is useful in real-world situations where you can look for more information to improve your model.
Now that I’m cleared up on the definitions, I do agree that there doesn’t really seem to be a difference between physical and logical uncertainty.