Cox’s theorem is a proof of Bayes’ rule from the conditions above. “Consistency” in this context means (Jaynes, p. 19): if a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result; we always take into account all of the evidence we have relevant to a question; and we always represent equivalent states of knowledge by equivalent plausibility assignments. By “reason in more than one way”, we specifically mean adding the same pieces of evidence in different orders.
(Edit: It’s page 114 in the PDF you linked. That seems to be the same text as my printed copy, but with the numbering starting in a different place for some reason.)
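To make “reasoned out in more than one way” concrete, here is a sketch along the lines of Jaynes’s derivation. Consistency forces the product rule, and the conjunction AB can be factored in either order, so both factorings must agree:

$$P(AB \mid C) = P(A \mid BC)\,P(B \mid C) = P(B \mid AC)\,P(A \mid C).$$

Equating the two and dividing by $P(B \mid C)$ gives Bayes’ rule:

$$P(A \mid BC) = \frac{P(B \mid AC)\,P(A \mid C)}{P(B \mid C)}.$$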
Assigning degrees of plausibility to theories is an attempt to justify them. Cox’s theorem just assumes you can do this. Popper argued that justification, including probabilistic justification, is impossible. How does just assuming something that Popper refuted show anything?
One argument for plausibility would be this.
At some point you may be called on to base a decision on whether something is true or false. The simplest of these decisions can be reduced to betting for or against something, and you cannot always choose not to bet. There must be some odds at which you switch from betting on falsity to betting on truth, and those can be taken to demonstrate your plausibility assignment.
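To spell out how the switch-over odds fix a number: suppose you are exactly indifferent at odds of $o:1$ against the proposition, so that a one-unit stake wins $o$ units if it is true and loses one unit if it is false. Indifference means zero expected gain:

$$p \cdot o - (1 - p) = 0 \quad\Longrightarrow\quad p = \frac{1}{1 + o}.$$

Switching over at 3:1 against, for instance, amounts to a plausibility assignment of 1/4.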
How does betting on the truth of a universal theory work? I can’t see a bookie ever paying out on that, although it would be good business to get punters to take such bets.
The usual way on Less Wrong is to bring in Omega, the all-powerful, all-knowing entity who spends his free time playing games with us mortals; for some reason most of his games illustrate some point of probability or decision theory. With Omega acting as the bookie you can be forced to assign a probability to any meaningful statement. Some people respond to such scenarios simply by asserting that Omega is impossible; I don’t know if you’re one of those people, but I’ll try a different approach anyway.
Imagine that in 2050 physicists have narrowed down all their candidates for a Theory of Everything to just two possibilities, creatively named X-theory and Y-theory.
An engineer who is a passionate supporter of X-theory has designed and built a new power plant. If X-theory is correct, his power plant will produce a limitless supply of free energy and bring humanity into a post-scarcity era.
However, a number of physicists have had a look at his designs, and have shown that if Y-theory is correct his power plant will create a black hole and wipe out humanity as soon as it is turned on. Somehow, it has ended up being your decision whether or not it goes on.
This is one such ‘bet’. It may not be a very likely scenario, but you should still be able to handle it. If we combine it with many slightly altered dilemmas, we can figure out your probability estimate of theory X being correct, whether you admit to having one or not.
You’ve presented this as a scenario in which you have to make a choice between two conflicting theories. But the problem you face isn’t “should I choose X or should I choose Y?”; it is “given this conflict, what should I do now?”. This problem is objective, it is different to the problem of whether X is right or Y is right, and it is solvable. Given that this is the year 2050 and humanity won’t in fact be wanting, the best solution may be to wait, pending further research to resolve the conflict. This isn’t an implicit bet against X and for Y; it is a solution to a different problem from the ones X and Y address.
For the sake of argument, say that the plant requires a rare and unstable isotope to get started. Earth’s entire supply is contained in the plant and will decay within 24 hours.
I could also pose a similar dilemma, but this time there is only one theory, and it says that whether the plant works or creates a black hole depends on a single quantum event with a 50% chance of going either way. What do you do? If you wouldn’t turn it on, I can ask the same question with only a 25% chance of a black hole, and so on, until I learn the ratio of the utility values you assign to “post-scarcity future” and “extinction of humanity”. This might, for example, tell me that the chance of a black hole has to be less than 30% for you to press the button.
Then I ask you the original dilemma, and learn whether the probability you assign to theory X is above or below 70%. If I have far too much time on my hands I can keep modifying the dilemma with slightly altered pay-offs until I pinpoint your estimate.
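Here is a minimal sketch of that elicitation in code. The would_press oracle and the utility numbers are hypothetical stand-ins for your answers, chosen so the indifference point lands at the 30% figure above; bisection over restated dilemmas narrows it down:

```python
def pinpoint_threshold(would_press, lo=0.0, hi=1.0, tol=1e-3):
    """Bisect on the stated chance of a black hole to find the highest
    risk at which you would still press the button."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if would_press(mid):
            lo = mid  # you accept this much risk, so probe higher
        else:
            hi = mid  # too risky, so probe lower
    return (lo + hi) / 2

# Illustrative agent: an expected-utility maximizer with U(post-scarcity) = 3,
# U(extinction) = -7, U(status quo) = 0.  It presses the button exactly
# when 3*(1 - p) - 7*p > 0, i.e. when p < 0.3.
def agent(p):
    return 3 * (1 - p) - 7 * p > 0

print(round(pinpoint_threshold(agent), 3))  # -> 0.3
```

About ten rounds of halving pin the threshold to three decimal places; the original dilemma then reveals on which side of the corresponding 70% mark your probability for theory X sits.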
I suppose you get that when the container containing the black dye explodes....
Damn, I made that mistake every single time I typed it and I thought I’d corrected them all.
This avoids the question. If it helps, try to construct a version of this in the least convenient possible world. For example, one obvious modification would be that something about theory X means the plant can only be turned on at a certain celestial conjunction, and otherwise one would need to wait a thousand years (yes, this is silly, but it gets the point across; that’s why it is a least convenient world).
One can vary the situation. For example, it might be that under theory X, medicine A will save a terminally ill cancer patient, and under theory Y, medicine B will save them. And A and B together will kill the patient according to both theories.
So: just bet on things the theory predicts instead.
Having the prediction turn out doesn’t make the theory true or more likely; it is just consistent evidence. There is an infinitude of other theories that the same evidence is consistent with.
To give a simple example, consider flipping a coin. You observe HHH. Is this a fair coin, a double-headed one, or a biased coin? Different theories describe these situations, and you could be asked to bet on them. Imagine you then further observe HHHH, making a total of HHHHHHH. This makes your estimate of the chances of the “double-headed coin” hypothesis go up. Other hypotheses may increase in probability too, but we are not troubled by there being an infinity of them, since we give extra weight to the simpler ones, using Occam’s razor.
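To make the update concrete, here is a minimal numeric sketch; the particular hypotheses, the 0.8 bias value, and the prior weights are illustrative assumptions, not anything fixed by the argument:

```python
# Candidate explanations of the coin, with Occam-style prior weights
# (simpler hypotheses get more prior mass; the numbers are illustrative).
hypotheses = {
    "fair (P(H)=0.5)":          (0.5, 0.50),  # (P(heads), prior)
    "double-headed (P(H)=1.0)": (1.0, 0.30),
    "biased (P(H)=0.8)":        (0.8, 0.20),
}

def posterior(n_heads):
    """Posterior over the hypotheses after n_heads heads in a row."""
    unnorm = {name: prior * p_h ** n_heads
              for name, (p_h, prior) in hypotheses.items()}
    z = sum(unnorm.values())
    return {name: w / z for name, w in unnorm.items()}

for n in (3, 7):  # after HHH, then after HHHHHHH
    print(n, {name: round(w, 3) for name, w in posterior(n).items()})
```

Running it, the double-headed hypothesis climbs from about 0.65 after HHH to about 0.87 after HHHHHHH, while every hypothesis consistent with the data keeps some weight.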
Do you think that grue and bleen are as plausible as blue and green? Would you like to bet?
Nitpicking here: grue and bleen aren’t statements and thus can’t be assigned probabilities. “This object is grue” and “this object is bleen” are statements.
Yes, I left making up more specific examples as an exercise for the reader.
If the object in question is an emerald, then grue is in conflict with our best explanations about emeralds, whereas there are no known problems with the idea that the emerald is green. So I go with green, not because I have assigned degrees of plausibility but because I see no problem with green.