Subjective probability allows logical uncertainty.
Subjective probabilities are inconsistent in any model that includes Peano arithmetic (which is essentially any non-finite model), by straightforward application of Gödel’s incompleteness theorems.
Most people here seem to be extremely unwilling to admit that probabilities and uncertainty are not the same thing.
Could you explain why this is true, please?
Let X() be a consistent probability assignment (a function from statements to probabilities).
Let Y() be a probability assignment including: Y(2+2=5) = X(Y is consistent), and otherwise Y(z)=X(z)
What’s X(Y is consistent)?
If X(Y is consistent)=1, then Y(2+2=5)=1, so Y is blatantly inconsistent, and X is therefore inconsistent according to basic laws of mathematics.
If X(Y is consistent)=0, then Y(2+2=5)=0=X(2+2=5), and by definition X=Y, so X is inconsistent according to itself.
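The case analysis above can be sketched as a toy check. This is a minimal sketch only: assignments are modeled as plain dicts, the names `make_Y` and `blatantly_inconsistent` are illustrative inventions, and actual consistency of an assignment is of course not computable.

```python
# Toy model of the construction above: a probability assignment is a dict
# from statements to numbers, and "Y is consistent" is treated as just
# another statement that X assigns a probability to.

def make_Y(X):
    """Build Y from X: Y agrees with X everywhere, except that
    Y(2+2=5) is defined to be X(Y is consistent)."""
    Y = dict(X)
    Y['2+2=5'] = X['Y is consistent']
    return Y

def blatantly_inconsistent(assignment):
    """Here "blatant" inconsistency just means assigning probability 1
    to a provably false statement; the p=0 case instead needs the
    X=Y argument from the text."""
    return assignment['2+2=5'] == 1.0

for p in (1.0, 0.0):
    X = {'2+2=5': 0.0, 'Y is consistent': p}
    Y = make_Y(X)
    print(p, blatantly_inconsistent(Y), Y == X)
# prints: 1.0 True False
#         0.0 False True
```

The p=1 branch reproduces the first horn (Y is blatantly inconsistent), and the p=0 branch reproduces the second (Y collapses back into X, so X disagrees with itself about its own consistency).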
That’s not Goedelian at all, it’s a variant of Russell’s paradox and can be excluded by an analogue of the theory of types (which would make Y an illegally self-referential probability assignment).
What if X(Y is consistent)=0.5? Then Y(2+2=5) = 0.5, and Y might or might not be inconsistent.
Another solution is of course to let X be incomplete, and refuse to assign X(Y is consistent). In fact, that would be the sensible thing to do. X can never be a function from “all” statements to probabilities; its domain should only include statements strictly smaller than X itself.
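The “strictly smaller statements” idea can be sketched as a stratification, in the spirit of the theory of types mentioned above. This is only a sketch: the `level` bookkeeping is an invented illustration, not a worked-out formalism.

```python
# Sketch of a type-stratified assignment: a level-n assignment only accepts
# statements of level strictly below n, so a statement about a level-1
# assignment (like "Y is consistent") lies outside any level-1 domain.

class StratifiedAssignment:
    def __init__(self, level, table):
        self.level = level
        self.table = table  # maps (statement, statement_level) -> probability

    def __call__(self, statement, statement_level):
        if statement_level >= self.level:
            # the assignment is incomplete by construction: it refuses
            # statements at its own level or above
            raise ValueError("statement level too high: self-referential")
        return self.table[(statement, statement_level)]

# Arithmetic statements sit at level 0; statements about level-1
# assignments sit at level 1.
X = StratifiedAssignment(1, {('2+2=4', 0): 1.0, ('2+2=5', 0): 0.0})
print(X('2+2=4', 0))          # 1.0
# X('Y is consistent', 1)     # would raise: outside X's domain
```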
If Y(2 + 2 = 5) = 0.5, Y is still blatantly inconsistent, so that won’t help.
I think your second point might be right, though. Isn’t it the case that the language of first-order arithmetic is not powerful enough to refer to arbitrary probability assignments over its statements? After all, there are uncountably many such assignments, and only countably many well-formed formulas in the language. So I don’t see why a probability assignment X in a model that includes Peano arithmetic must also assign probabilities to statements like “Y is consistent”.
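The counting argument can be illustrated with a diagonalization in miniature. This is a toy sketch: strings stand in for formulas, the list is finite rather than countably infinite, and `diagonal` is an invented name.

```python
# Diagonal argument in miniature: given any list of assignments indexed by
# formulas (countably many), build an assignment that differs from the n-th
# assignment on the n-th statement, and hence appears nowhere in the list.

statements = [f"S{n}" for n in range(5)]                  # stand-in formulas
definable = [{s: 0.5 for s in statements} for _ in statements]

def diagonal(definable, statements):
    new = {}
    for n, s in enumerate(statements):
        # disagree with the n-th "definable" assignment on statement n
        new[s] = 0.0 if definable[n][s] > 0.5 else 1.0
    return new

d = diagonal(definable, statements)
print(d not in definable)     # True: d escapes the whole list
```

Since formula-definable assignments form at most a countable list, the same construction shows some assignments over the statements of arithmetic are not definable in the language at all.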
If you let X be incomplete like twanvl suggests, then you pretty much agree with my position of using probability as a useful tool, and disagree with Bayesian fundamentalism.
Getting into the finer points of what is constructible or provable in which language is really not the kind of discussion we could usefully have within the confines of LessWrong comment boxes, since we would need to start by formalizing everything far more than we normally do. And it wouldn’t really work anyway: it is simply not possible to escape Gödel’s incompleteness theorem if you have something even slightly powerful and non-finite; it will get you one way or another.
I’m possibly being obtuse here, but I still don’t see the connection to the incompleteness theorem. I don’t deny that any consistent theory capable of expressing arithmetic must be incomplete, but what does that have to do with the argument you offered above? That argument doesn’t hinge on incompleteness, as far as I can see.
This is slightly exaggerated. The first-order theory of the real numbers is non-finite and quite powerful, but Tarski showed it has a complete (and decidable) axiomatization; it escapes Gödel because the natural numbers are not definable in it.
How is a distribution useful if it refuses to answer certain questions? I think I’m misunderstanding something you said, since I think that the essence of Bayesianism is the idea that probabilities must be used to make decisions, while you seem to be contrasting these two things.
What does it mean for this function to be “consistent”? What kinds of statements do you allow?
If “probability assignment” is a mapping from statements (or Goedel numbers) to the real interval [0,1], it’s not a given that Y, being a “probability assignment”, is definable, so that you can refer to it in the statement “Y is consistent” above.
I can’t speak for anyone else, but for my part that’s because I rarely if ever see the terms used consistently to describe different things. That may not be true of mathematicians, but very little of my language use is determined by mathematicians.
For example, given questions like:
1) When I say that the coin I’m about to flip has an equal chance of coming up heads or tails, am I making a statement about probability or uncertainty?
2) When I say that the coin I have just flipped, but haven’t yet looked at, has an equal chance of having come up heads or tails, am I making a statement about probability or uncertainty?
3) When I say that the coin I have just looked at has a much higher chance of having come up heads rather than tails, but you haven’t looked at the coin yet and you say at the same time that it has an equal chance of having come up heads or tails, are we both making a statement about the same thing, and if so which thing is it?
...I don’t expect consistent answers from 100 people in my linguistic environment. Rather I expect some people will answer “uncertainty” in all three cases, other people will answer “probability”, still others will give neither answer. Some might even say that I’m talking about “probability” in case 1, “uncertainty” in case 2, and that in case 3 I’m talking about uncertainty and you’re talking about probability.
In that kind of linguistic environment, it’s safest to treat the words as synonyms. If someone wants to talk to me about the difference between two kinds of systems in the world, the terms “probability” and “uncertainty” aren’t going to be very useful for doing so unless they first provide two definitions.
Did you mean to say incomplete (eg, implying that some small class of bizarrely constructed theorems about subjective probability can’t be proven or disproven)?
Because the standard difficulties that Godel’s theorem introduces to Peano arithmetic wouldn’t render subjective probabilities inconsistent (eg, no theorems about subjective probability could be proven).
People tell me otherwise.
I don’t know if that actually solves the problem. Nor do I know if it makes sense to claim that understanding the two meanings of a Gödel statement, and the link between them, puts you in a different formal system which can therefore ‘prove’ the statement without contradiction. But it seems to me this accounts for what we humans actually do when we endorse the consistency of arithmetic and the linked mathematical statements. We don’t actually have the brains to write a full Gödel statement for our own brains and thereby produce a contradiction.
In your example below, X(Y is consistent) might in fact be 0.5 because understanding what both systems say might put us in Z. Again, this may or may not solve the underlying problem. But it shouldn’t destroy Bayesianism to admit that we learn from experience.