Were it true, then the correct answer to “What is the probability that a fair coin toss, about which you know nothing else, comes up Heads?” would be: “Any real number between zero and one”.
Yes, you can make a valid probability space where the probability of Heads is any such number. What you can’t do is say that that probability space accurately represents the system.
Probability Theory only determines the requirements for such a space. It makes no mention of how you should satisfy them. And I assumed you would know the difference, and not try to misrepresent it.
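The distinction can be made concrete in code. Below is a minimal sketch (the function name and tolerance are my own choices, not anything from the thread): the finite Kolmogorov requirements admit any p in [0, 1] for Heads, and nothing in the axioms themselves selects p = 1⁄2.

```python
def is_valid_probability_space(outcomes, prob):
    """Check the finite Kolmogorov requirements for an assignment over
    mutually exclusive outcomes: non-negativity and normalization."""
    if any(prob[o] < 0 for o in outcomes):
        return False  # non-negativity violated
    # Probabilities over the full partition of outcomes must sum to 1.
    return abs(sum(prob[o] for o in outcomes) - 1.0) < 1e-12

# Any p in [0, 1] yields a *valid* space for a coin toss...
for p in (0.0, 0.3, 0.5, 1.0):
    assert is_valid_probability_space(["H", "T"], {"H": p, "T": 1 - p})

# ...so choosing p = 1/2 for a fair coin is justified by something outside
# the axioms (symmetry, the Principle of Indifference), not by the axioms.
```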
I have no idea how you managed to come to the conclusion that I don’t understand that probability is about lack of knowledge.
Because of responses like that last one, and this:
It is not about what can be shown, deterministically, to be true when we have perfect knowledge of a system. It is about the different outcomes that could be true when your knowledge is imperfect.
Makes little sense.
Okay, I misspoke. Because I also assumed you would at least try to understand. You need complete knowledge of an assumed (Bayesian) or actual (Frequentist) system that can produce different results, but lack at least some knowledge of the one instance of the system in question; that is, of one particular result.
The system in the SB problem is a two-day experiment, where the individual (single) days are observed separately. There are four combinations, which depend on that separation, and the four are equally likely to apply to any single day in the system. An “instance” is not one experiment; it is one day’s observation by SB, because, due to the amnesia, it is independent of any other instance, including a different day in the same experiment. The partial information that she gains is that it is not Tuesday after Heads. What I keep trying to convince you of is that that combination is part of the system and has a probability in the probability space.
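One way to formalize the model described above (this is my sketch of that reading, not an agreed-upon solution): enumerate the four equally likely coin/day combinations for a single observed day, then condition on the information that the day is not Tuesday-after-Heads.

```python
from fractions import Fraction

# The four equally likely (coin, day) combinations for a single observed day.
combos = [("Heads", "Mon"), ("Heads", "Tue"), ("Tails", "Mon"), ("Tails", "Tue")]
prior = {c: Fraction(1, 4) for c in combos}

# On waking, SB learns the combination is not Heads+Tuesday; condition on that.
awake = [c for c in combos if c != ("Heads", "Tue")]
total = sum(prior[c] for c in awake)
posterior = {c: prior[c] / total for c in awake}

# Under this reading, the posterior probability of Heads comes out to 1/3.
p_heads = sum(p for (coin, _), p in posterior.items() if coin == "Heads")
print(p_heads)  # 1/3
```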
And how is it different from the initial example with the googolth digit of pi? Why can’t we say that we are not assigning non-zero probability to a false statement?
Because the system is “a digit in {0,1,2,3,4,5,6,7,8,9} is selected in a manner that is completely hidden from us at the moment, even if it could be obtained”. The probability refers to what the result of calculating the nth digit could be when we do not apply n to the formula for pi, not when we do.
If what you are suggesting were true, then the correct answer to “What is the probability that a fair coin toss, about which you know nothing else, comes up Heads?” would be: “It is either 0 or 1, but we can’t tell.”
And the reason I “[concluded] that [you] don’t understand that probability is about lack of knowledge” is because you are trying to incorporate what happens in an instance into the probability for system elements. It is because you fail to recognize, in that last quote, that the probability is a property of what you could know about an instance but don’t, due to ignorance, not of what is true when you don’t know.
Yes, you can make a valid probability space where the probability of Heads is any such number. What you can’t do is say that that probability space accurately represents the system.
Probability Theory only determines the requirements for such a space. It makes no mention of how you should satisfy them. And I assumed you would know the difference, and not try to misrepresent it.
What kind of misrepresentation are you talking about? You have literally just claimed that there is no way to know whether a particular probabilistic model correctly represents the system.
Frankly, I have no idea what is going on with your views. There must be some kind of misunderstanding, but to clear it up I came up with the clearest demonstration of the absurdity of your position, and you decided to bite the bullet anyway.
Do you really not see that this completely invalidates all epistemology? That if we can only say that a map is valid, but are never able to claim anything about its soundness with respect to a particular territory, then the map is useless for navigation?
Because of responses like that last one, and this:
Okay, I misspoke. Because I also assumed you would at least try to understand.
So you came to a wrong conclusion about my views because you misspoke? And you misspoke because you assumed that I would be trying to understand? Okay, look, I think we need to calm down a bit, take a deep breath, and take a step back. This conversation seems entangled in combativeness, and it affects your ability to explain yourself clearly and my ability to understand what you are talking about, and probably vice versa. I’m happy to assume that you mean something reasonable by the things you are saying and that it’s just an issue of miscommunication, if you are ready to extend the same courtesy to me. I would be glad to understand your position. I promise to give my best effort to do that. But you also need to make your best effort at explaining yourself. I will probably keep misunderstanding you in all kinds of ways anyway, not because of ill intent, but simply because of inferential distances and some unresolved cruxes of disagreement which we can’t get to until we understand each other.
You need complete knowledge of an assumed (Bayesian) or actual (Frequentist) system that can produce different results, but lack at least some knowledge of the one instance of the system in question; that is, of one particular result.
Okay, I definitely agree with the “lack at least some knowledge” part. If we had a perfect map, then there would be no point in approximating the territory with probabilistic models.
And that’s exactly my point. In both logical and empirical uncertainty we do lack some knowledge. Therefore we use the approximate probability theoretic model and get an approximate result. So what is the problem? Where is the difference between logical and empirical uncertainty that so many people struggle with?
The system in the SB problem
Please, put the poor Beauty to rest for now. It has nothing to do with the topic of discussion, and using it as an example to explain your point is very counterproductive, as it’s probably one of the most controversial examples between the two of us. Take something simpler, like a fair coin toss or a die roll.
And again, I don’t see how you can first claim that
What you can’t do is say that that probability space accurately represents the system.
But then start talking about a probabilistic description of the system, which, I suppose, you assume to be accurate? Or are you just saying: “this is a valid probability space that satisfies the axioms, regardless of any actual process that can happen in the real world”? This I can agree with. It’s just a very weak statement, and you seem to behave as if you mean something much stronger.
Because the system is “a digit in {0,1,2,3,4,5,6,7,8,9} is selected in a manner that is completely hidden from us at the moment, even if it could be obtained. The probability refers to what the result of calculating the nth digit could be when we do not apply n to the formula for pi, not when we do.
Well yes, exactly! We don’t know which digit the algorithm would produce, and we don’t have any reason to expect any particular digit better than chance, therefore we are indifferent between all the digits, so 1⁄10 for each of them. Just as if we were dealing with empirical uncertainty.
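To make the “before we apply the formula” point concrete, here is a sketch using Gibbons’ unbounded spigot algorithm for the digits of pi (the algorithm itself is standard; the framing around it is mine): before running it for position n we are indifferent, 1⁄10 each; running it collapses the distribution to a point mass.

```python
def pi_digits():
    """Yield decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def nth_digit(n):
    """Return the nth decimal digit of pi (n = 1 is the leading 3)."""
    gen = pi_digits()
    for _ in range(n):
        d = next(gen)
    return d

# Before computing: indifference spreads credence uniformly, 1/10 per digit.
prior = {d: 0.1 for d in range(10)}

# After computing: the distribution collapses to a point mass on the answer.
digit = nth_digit(20)
posterior = {d: (1.0 if d == digit else 0.0) for d in range(10)}
```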
If what you are suggesting were true, then the correct answer to “What is the probability that a fair coin toss, about which you know nothing else, comes up Heads?” would be: “It is either 0 or 1, but we can’t tell.”
What? No, absolutely not! It would be: we lack any reason to expect either outcome more than the other, so we are indifferent between both outcomes, so 1⁄2 for each of them. The exact same kind of reasoning for both logical and empirical uncertainty.
I’m starting to suspect that we are actually in complete agreement about logical uncertainty and you simply didn’t properly read my opening post and assumed, for some reason, that I hold the opposite of your position. Could you please outline your model of my views on the subject? What do you think my stance on logical uncertainty is?
What kind of misrepresentation you are talking about? You have literally just claimed that there is no way to know whether a particular probabilistic model correctly represents the system.
Since what I said was that probability theory places no restrictions on how to assign values, but that you have to make assignments that are reasonable based on things like the Principle of Indifference, this would be an example of misrepresentation.
If what you are suggesting were true, then the correct answer to “What is the probability that a fair coin toss, about which you know nothing else, comes up Heads?” would be: “It is either 0 or 1, but we can’t tell.”
What? No, absolutely not! It would be: we lack any reason to expect either outcome more than the other, so we are indifferent between both outcomes, so 1⁄2 for each of them. The exact same kind of reasoning for both logical and empirical uncertainty.
You claim that “the probability that the nth digit of pi, in base 10, is 1⁄10 for each possible digit,” is assigning a non-zero probability to nine false statements, and a probability that is less than 1 to one true statement. I am saying that such probabilities apply only if we do not apply the formula that will determine that digit.
I claim that the equivalent statement, when I flip a coin but place my hand over it before you see the result, is “the probability that the coin landed Heads is 1⁄2, as is the probability that it landed Tails.” And the truth of these two statements is just as deterministic if I lift my hand. Or if I looked before I hid the coin. Or if someone has an x-ray that can see it. The probability in question is about the uncertainty when no method is applied that determines the truth of the statements, even when we know such methods exist.
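The hidden-coin claim can be illustrated by simulation (a sketch; the seed and sample size are arbitrary choices of mine): each instance is already determined once the coin lands, yet the observer’s 1⁄2, assigned before any reveal method is applied, is calibrated across many such instances.

```python
import random

random.seed(0)

N = 100_000
# Each toss lands with a definite result, then is hidden under the hand.
results = [random.choice(["H", "T"]) for _ in range(N)]

# Before any reveal method (lifting the hand, an x-ray, ...), the observer
# assigns 1/2. That credence is calibrated: the long-run frequency of Heads
# over many hidden instances is ~0.5.
freq_heads = results.count("H") / N
assert abs(freq_heads - 0.5) < 0.01

# Revealing any one instance collapses *that instance* to 0 or 1; it does not
# change what the observer should have said before the reveal.
```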
I’m saying that this is not a question in epistemology, not that epistemology is invalid.
And the reason the SB problem is pertinent is that it does not matter whether H+Tue can be observed. It is one of the outcomes that is possible.
Since what I said was that probability theory places no restrictions on how to assign values, but that you have to make assignments that are reasonable based on things like the Principle of Indifference
That’s not what you said at all. Re-read our comment chain. There is not a single mention of the Principle of Indifference by you there. I specifically brought up an example of a fair coin about which we know nothing else, and yet you agreed that any number from 0 to 1 is the right answer to this problem. The first time you mentioned the Principle of Indifference under this post was in an answer to another person, which happened after my alleged misrepresentation of your views.
Now, I’m happy to assume that you meant this all along and were just unable to communicate your views properly. Please do better next time. But for now, let me try to outline your views as far as I understand them, which do appear less ridiculous now:
You believe that probability theory only deals with whether something is a probability space at all, that is, whether it fits the axioms or not. And the question of which valid probability space is applicable to a given problem, based on some knowledge level, doesn’t have a “true” answer. Instead, there is a completely separate category of “reasonableness”: some probability spaces are reasonable for a given problem based on the Principle of Indifference, but this is a completely separate domain from probability theory.
Please correct whatever I got wrong, and then I hope we will be able to move forward.
You claim that “the probability that the nth digit of pi, in base 10, is 1⁄10 for each possible digit,” is assigning a non-zero probability to nine false statements, and probability that is less than 1 to one true statement.
No, I don’t. I’m ready to entertain this framework, but then I don’t see any difference compared to an example with empirical uncertainty. Do you see this difference? Do you think there is some fundamental reason why we can say that the probability of a coin coming up Heads is 1⁄2, and yet we can’t say the same about the probability that a particular digit of pi, unknown to us, is even?
I am saying that such probabilities apply only if we do not apply the formula that will determine that digit.
You mean, before we apply the formula, right? Suppose we are going to apply the formula in a moment; we can still say that, before we applied it, the probability for any digit is 1⁄10, can’t we? And after we have applied the formula, we know which digit it is, so it’s 1 for this particular digit and 0 for all the rest.
I claim that the equivalent statement, when I flip a coin but place my hand over it before you see the result, is “the probability that the coin landed Heads is 1⁄2, as is the probability that it landed Tails.” And the truth of these two statements is just as deterministic if I lift my hand. Or if I looked before I hid the coin. Or if someone has an x-ray that can see it. The probability in question is about the uncertainty when no method is applied that determines the truth of the statements, even when we know such methods exist.
No disagreement here. Do you agree that the scenarios with logical and empirical uncertainty work exactly the same way?
I’m saying that this is not a question in epistemology
If it’s a question of neither probability theory nor epistemology, what is it a question of? How long are you going to pass the buck before engaging with this question? And what is the answer?