I’d like to ask a moronic question or two whose answers aren’t immediately obvious to me but probably should be. (Please note, my education is very limited, especially procedural knowledge of mathematics and probability.)
If I had to guess the result of a coin flip, what confidence would I place in my guess? 50%, because that’s the same as the probability of me being correct, or 0%, because I’m just randomly guessing between two outcomes and have no evidence to support either (well, I guess there being only two outcomes is some kind of evidence)?
Likewise with a lottery. Would I place my confidence level (interval? I don’t know the terminology) of winning at 0% or 1⁄6,000,000? Or some other number entirely?
If this is something I could easily have figured out with Google or Wikipedia, my apologies. Also if my question is incoherent or flawed please let me know.
Think of the probability you assign as a measure of how “not surprised” you would be at seeing a certain outcome.
The total probability of all the mutually exclusive (and exhaustive) possibilities has to add up to 1, right?
So if you would be equally surprised at heads or tails coming up, and you consider all other possibilities negligible (or you state your prediction as “given that the coin lands such that one face is clearly the ‘face up’ face...”), then you ought to assign a probability of 1⁄2 to each. (Again, slightly less to account for various “out of bounds” outcomes, but in the abstract, considered on its own, 1⁄2.)
That is, the same probability ought to be assigned to each, since you’d (reasonably) be equally surprised at each outcome. So if the two also have to sum to 1 (100%), then 1⁄2 (50%) is the correct amount of belief to assign to each.
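Spelled out, the argument above is just two constraints pinning down one number:

```latex
P(\mathrm{heads}) = P(\mathrm{tails}), \qquad
P(\mathrm{heads}) + P(\mathrm{tails}) = 1
\;\Longrightarrow\; P(\mathrm{heads}) = P(\mathrm{tails}) = \tfrac{1}{2}
```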
Surprise is not isomorphic to probability. See this.
Ah, that makes a lot more sense: I was looking at the probability from the viewpoint of my guess (i.e. heads) instead of just looking at all the outcomes equally (no privileged guesses), if you take my meaning. I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea. Thanks for the reply.
Well, maybe you were thinking about “how confident am I that this is a fair coin vs that it’s biased toward heads vs that it’s biased toward tails” which is a slightly different question.
Given how ‘confidence’ is used in a social context that differentiation would feel quite natural.
In the context of most discussions on this site, “confidence” is the probability that a guess is correct. For example:
I guess that a flipped coin will land heads. My confidence is 1⁄2, because I have arbitrarily picked 1 out of 2 possible outcomes.
I guess that, when a coin is flipped repeatedly, the ratio of heads will be close to half. My confidence is close to 1, because I know from experience that most coins are fair (and the law of large numbers).
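That second example is easy to check empirically. Here’s a quick Python sketch (the coin’s fairness is an assumption baked into the `0.5` threshold, and the function name is my own):

```python
import random

def heads_ratio(n_flips, seed=0):
    """Flip a simulated fair coin n_flips times; return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

print(heads_ratio(10))       # small samples wander a lot
print(heads_ratio(100_000))  # large samples hug 0.5 (law of large numbers)
```

The small-sample ratio can land anywhere, but the large-sample ratio reliably sits very close to 0.5, which is why confidence in the second guess is close to 1.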
“Confidence interval” is just confidence that something is within a certain range.
You should also be aware that in the context of frequentism (most scientific papers), these terms have different and somewhat confusing technical definitions.
You might want to look at Dempster-Shafer theory, which is a generalisation of Bayesian reasoning that distinguishes belief from probability. It is possible to have a belief of 0 in heads, 0 in tails, and 1 in {heads,tails}.
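For concreteness, here’s a minimal Python sketch of that belief-of-0-in-each-outcome situation (the function names and dict-of-frozensets representation are my own choices, not a standard API):

```python
def belief(masses, event):
    """Belief in `event`: total mass of focal sets wholly contained in it."""
    return sum(m for focal, m in masses.items() if focal <= frozenset(event))

def plausibility(masses, event):
    """Plausibility of `event`: total mass of focal sets overlapping it."""
    return sum(m for focal, m in masses.items() if focal & frozenset(event))

# Total ignorance about the coin: all mass on the whole frame {H, T},
# so belief in heads alone is 0 while belief in "heads or tails" is 1.
masses = {frozenset({"H", "T"}): 1.0}
```

With this mass assignment, the belief in heads alone is 0 and the belief in {heads, tails} is 1, while the plausibility of heads is 1, capturing ignorance in a way a single probability number can’t.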
It may be that, when looked at properly, DS theory turns out to be Bayesian reasoning in disguise, but a brief Google search didn’t turn up anything definitive. Is anyone here more informed on the matter?
After looking at the reasoning in that article I was about to credit myself with being unintentionally deep, but I’m pretty sure that when I posed the question I was assuming a fair coin for the sake of the problem. Doh. Thanks for the interesting link.
(It’s really kind of embarrassing asking questions about simple probability amongst all the decision theories and Dutch books and priors and posteriors and inconceivably huge numbers. Only way to become less wrong, I suppose.)