There is no problem with theories that make probabilistic predictions. But getting a probabilistic prediction is not tantamount to assigning a probability to the theory that made the prediction.
True. But you seem to be assuming that a “theory” has to be a universal law of nature. You are too attached to physics. In other sciences, you can have a theory that is quite explanatory but is not in any sense a “law”; rather, it describes an event. Examples:
the theory that the moon was formed by a collision between the earth and a Mars-sized planetesimal.
the theory that modern man originated in Africa within the past 200,000 years and that the Homo erectus population outside of Africa did not contribute to our ancestry.
the theory that Napoleon was poisoned with arsenic in St. Helena.
the “aquatic ape theory”
the endosymbiotic theory of the origin of mitochondria
the theory that the Chinese discovered America in 1421.
Probabilities can be assigned to these theories.
And even for universal theories, you can talk about the relative odds of competing theories being correct—say between a supersymmetric GUT based on E6 and one based on E8. (Notice, I said “talk about the odds”, not “calculate them”.) And you can definitely calculate how much one particular experimental result shifts those odds.
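The odds-shifting calculation mentioned above can be sketched in a few lines. The numbers here are entirely invented, and `update_odds` is a hypothetical helper, not anything from the discussion; the point is only that the shift in odds is a straightforward multiplication by a likelihood ratio.

```python
# Sketch: how one experimental result shifts the odds between two
# competing theories T1 and T2. All numbers are made up for illustration.
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * [P(result | T1) / P(result | T2)]."""
    return prior_odds * likelihood_ratio

# Suppose we start indifferent between the two GUTs (odds 1:1) and an
# experiment yields a result twice as probable under T1 as under T2.
posterior = update_odds(1.0, 2.0)
print(posterior)  # 2.0 -- the result shifts the odds to 2:1 in favor of T1
```

Note that this calculation never requires an absolute probability for either theory; it only tracks how evidence moves the ratio between them.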
As you pointed out earlier, we have two ostensibly different ways of investigating the theory that the Chinese discovered America in 1421: the Popperian way, in which this theory and alternatives to it are criticized. And the Bayesian way, in which those criticisms are broken down into atomic criticisms, and likelihood ratios are attached and multiplied.
I’ve seen plenty of rigorous Popperian discussions but not many rigorous—or even modestly rigorous—Bayesian discussions, even on this website. One piece of evidence for the China-discovered-America theory is some business about old Chinese maps. How does a Bayesian go about estimating the likelihood ratio P(China discovered America | old maps) / P(China discovered America | no old maps)?
I think you want to ask about P(maps|discover) / P(no maps|discover). Unless both Wikipedia and my intuition are wrong.
Does catching you in this error relieve me of the responsibility of answering the question? I hope so. Because I would want to instead argue using something like P(maps|discover) vs P(maps|not discover). That doesn’t take you all the way to P(discover), but it does at least give you a way to assess the evidential weight of the map evidence.
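The move above, comparing P(maps|discover) against P(maps|not discover), can be made concrete with a Bayes' theorem sketch. Every number below is a hypothetical assessment invented for illustration, and `posterior_probability` is a made-up helper; nothing here is a real estimate of the map evidence.

```python
# Sketch of weighing the map evidence: fold the likelihood ratio
# P(maps | discover) / P(maps | not discover) into a prior.
def posterior_probability(prior, p_e_given_h, p_e_given_not_h):
    """Bayes: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Invented numbers: old maps judged 3x as likely if the Chinese reached
# America than if they did not, against a skeptical 1% prior.
p = posterior_probability(0.01, 0.3, 0.1)
print(round(p, 4))  # 0.0294 -- the maps help, but not decisively
```

This is exactly the gap the comment points at: the likelihood ratio assesses the evidential weight of the maps, but getting all the way to P(discover) still requires a prior from somewhere.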
Now P(Sewing-Machine is a phony) = ?
Here’s another personal example of Bayesianism in action. Do you have a sense of how much you updated by? P(Richard Dawkins praises Steven Pinker | EP is bunk) / P(Richard Dawkins praises Steven Pinker | EP is not bunk) is .5? .999? Any idea?
P(“Sewing Machine” is a nym) = 1.0
P(Sewing Machine has been disingenuous) = 0.5 and rising
P(Dawkins praises Pinker|EP is not bunk) is ill defined because
P(EP is not bunk) = ~0
but I have updated P(Dawkins believes EP is not bunk) to at least 0.5
I don’t know what “disingenuous” means.
The reason we would accept this theory as true is that it has survived criticism as an explanation while its rivals have not. If another rival theory is still in contention by also having survived criticism, then there is a problem, and this problem is not going to be resolved by computing, somehow, probabilities that the theories are true. To solve the problem you are going to have to come up with better criticisms or, possibly, alternative theories.
One difference between theories and events is that the counterfactuals of an event exist (in the multiverse). So it makes sense to talk about the probability of an event: the counterfactual events are real and occur to a greater or lesser measure in the multiverse. A false theory, by contrast, is simply false; it has no connection to reality. How do you assign anything other than an arbitrary probability to something that simply cannot be? Fortunately we don’t have to: we have Popperian epistemology.
Intrade knows that one! In the Higgs boson bet, over 200 people think they know how to assign a probability to the issue.
http://www.intrade.com/jsp/intrade/common/c_cd.jsp?conDetailID=622297&z=1224442713385
The bet is for an event:
Shrug.
1: By using the counterfactuals in the Tegmark Level IV multiverse.
2: By giving it a probability of 0. If T is falsified, that means P(D|T) = 0: we obtained data that T claims is impossible. In this case, Bayes’ theorem sets P(T|D) = 0. Bayesianism includes all correct thinking tools, including Popperian epistemology.
But is P(D|T) really 0? We could have made a mistake and not recorded the correct data. Certainly scientists in the past have done so, and thought that they falsified theories that they didn’t falsify. In this case, P(D|T) is very small but nonzero, and so is P(T|D) (unless P(D|~T) is also very small).
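The contrast between strict falsification and the "maybe we misrecorded the data" case can be checked numerically. The figures below are invented for illustration, and `bayes_update` is a hypothetical helper; the sketch only shows that Popperian falsification falls out of Bayes' theorem as the P(D|T) = 0 limit.

```python
# Sketch contrasting strict falsification (P(D|T) = 0) with the more
# realistic case where the recorded data might be mistaken.
def bayes_update(prior, p_d_given_t, p_d_given_not_t):
    """P(T|D) via Bayes' theorem, with ~T as the only alternative."""
    numerator = p_d_given_t * prior
    return numerator / (numerator + p_d_given_not_t * (1.0 - prior))

# Strict Popperian falsification: the theory said D was impossible.
print(bayes_update(0.9, 0.0, 0.5))   # 0.0 -- Bayes recovers falsification

# With a small chance the data were misrecorded, T survives, barely.
print(bayes_update(0.9, 1e-6, 0.5))  # ~1.8e-05
```

The second call shows the point made above: a tiny but nonzero P(D|T) leaves P(T|D) tiny but nonzero, rather than exactly zero.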
3: You cannot avoid giving a probability, because of Cox’s theorem, which says we must use probability theory to reason about uncertainty (although I must confess that the assumption that we must use a single real number to represent uncertainty is rather strong).
Counterfactuals of the sort of theories that Perplexed describes do exist in a Quantum Multiverse; why not assign them probabilities?
Anyway, why would you want to make your entire epistemology contingent on a particular theory of physics? It sounds like CR would collapse if Copenhagen turned out to be correct.
Yes, counterfactuals do exist in some of Perplexed’s examples. There are alternative universes where Earth does not have a moon, or it does have a moon but it was not formed by a collision with a planetesimal. However, in this universe, the one we are observing now, the moon either was or was not formed in such a way. There is no middle ground. Finding evidence consistent with the theory does not make the theory truer or more likely—what the evidence does is supply us with criticisms of rival theories.
CR is not independent from physics, nor can it be. The laws of physics entail the existence of universal computers and of universal knowledge creators. As David Deutsch has shown, there are deep connections between multiversal quantum physics and the theory of information. If Copenhagen should turn out to be true, it would impact many things, not least CR.