Good point. I don’t disagree with that.
You’re a physicist, do you know of any better examples of issues where traditional-science goggles and Bayes goggles disagree?
By the very nature of the topic, any contemporary examples cannot fail to be controversial. If “traditional” scientific rationality supports position X, then many or most scientists will support X, and the claim that they are wrong and the true position is the Bayes-supported Y is bound to be controversial.
So for non-controversial examples one would have to look to the history of science. For example, there must have been cases where a new theory was proposed that was, by Bayesian standards, much better than the current ones, but was not accepted by the scientific community until confirmed by experiment. Maybe general relativity?
Physicists love simplicity, so they are naturally Bayesian. Unfortunately, Nature is not, otherwise the cosmological constant would be zero, the speed of light would be infinite, neutrinos would be massless, and the Standard Model of particle physics would be based on something like SU(5) instead of SU(3)×SU(2)×U(1).
Until general relativity was confirmed by experiment, who besides Einstein had the necessary evidence? I’m not familiar enough with the case to really say how much of a difference there should have been.
To me Bayes is but one calculational tool, a way to build better models (i.e. those with higher predictive power), so I do not understand how Bayes can disagree with the traditional scientific method (not the strawmanned version EY likes to destroy). Again, I might be completely off, feel free to suggest what I missed.
Bayes is the well-proven (to my knowledge) framework for handling learning from evidence. All the other tools can be understood by how they derive from or contradict Bayes, like how engines can be understood in terms of thermodynamics.
If you let science define itself as rationality (i.e., as whatever works for epistemology), then there can be no conflict with Bayesian rationality, but I don’t think current (or traditional, idealized) science is constructed that way. Some elements of Eliezer’s straw science are definitely out there, and I’ve seen some of it first-hand. On the other hand, I don’t know the science scene well enough to find good examples, which is why I asked.
Bayesian updating is a good thing to do when there is no conclusive evidence to discriminate between models and you must decide what to do next. It should be taught to scientists, engineers, economists, lawyers and programmers as the best tool available when deciding under uncertainty. I don’t see how it can be pushed any farther than that, into the realm of determining what is.
There are plenty of Bayesian examples this crowd can benefit from, such as “My code is misbehaving, what’s the best way to find the bug?”, but, unfortunately, EY does not seem to want to settle for a small fry like that.
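A minimal sketch of what that bug-hunting example might look like: treat the candidate fault locations as hypotheses and update on each test result. The component names, priors, and likelihoods below are all invented for illustration.

```python
# Toy Bayesian bug hunt: three hypothetical fault locations, updated
# after observing one (made-up) test outcome.

# Prior beliefs about where the bug lives (assumed numbers).
priors = {"parser": 0.5, "cache": 0.3, "io": 0.2}

# P("test_parse fails" | bug is in component) -- also assumed.
likelihood = {"parser": 0.9, "cache": 0.2, "io": 0.1}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)  # the parser hypothesis now dominates
```

The point is only that the update tells you which test to run next: probe the component whose posterior is highest, or whose outcome would shift the posterior the most.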
Likewise with conclusive evidence. Bayes is always right.
I think I’ve confused you, sorry. I don’t mean to claim that Bayes implies or is able to support realism any better or worse than anything else. Bayes allocates anticipation between hypotheses. The what-is thing is orthogonal and (I’m coming to agree with you) probably useless.
1: It’s overkill in this case.
2: If you are doing science and not, say, criminal law, at some point you have to get that conclusive evidence (or at least as conclusive as it gets, like the recent Higgs confirmation). Bayes is still probably, on average, the fastest way to get there, though.
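"As conclusive as it gets" has a simple Bayesian reading: with enough independent observations each favoring a hypothesis, the posterior odds grow multiplicatively until the question is settled for practical purposes. A sketch, assuming each observation carries a 2:1 likelihood ratio:

```python
import math

# Posterior probability of H after n independent observations, each
# with an assumed likelihood ratio of 2:1 in favor of H, starting
# from indifferent 1:1 prior odds.
log_lr_per_obs = math.log(2)

for n in (1, 10, 30):
    log_odds = n * log_lr_per_obs          # log prior odds = log(1) = 0
    p = 1 / (1 + math.exp(-log_odds))      # convert odds back to probability
    print(f"after {n:2d} observations: P(H) = {p:.9f}")
```

After 30 such observations the odds are about a billion to one, which is the sense in which accumulating evidence eventually becomes effectively conclusive.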
Feel free to unpack what you mean by right. Even your best Bayesian guess can turn out to be wrong.
So? It’s correct. Maybe you use some quick approximation, but it’s not like doing the right thing is inherently more costly.
This get-better-evidence thing would also be recommended by Bayes + decision theory (and if it weren’t, it would have to defer to Bayes + decision theory). I don’t see the relevance.
The right probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution. That’s missing a bunch of hairy stuff involving where to get the outer probability distribution, but I hope you get the point.
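One concrete way to read that definition: under a proper scoring rule (log score here, standing in for utility), the probability assignment that matches the true distribution maximizes expected utility. The "true" probability and the candidate assignments below are assumed for illustration.

```python
import math

TRUE_P = 0.7  # assumed true chance the event happens

def expected_log_score(assigned_p, true_p=TRUE_P):
    # Expected log score, taken under the true distribution, for an
    # agent who assigns probability assigned_p to the event.
    return true_p * math.log(assigned_p) + (1 - true_p) * math.log(1 - assigned_p)

# The assignment matching reality scores best in expectation.
candidates = [0.3, 0.5, 0.7, 0.9]
best = max(candidates, key=expected_log_score)
print(best)  # 0.7
```

This is the "maximizes expected utility" criterion in miniature: the distribution that wins is the one that matches how the world actually behaves, not the one that happens to pay off on any single draw.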
You can often get lucky by not using Bayesian updating. After all, that’s how science has been done for ages. What matters in the end is the superior explanatory and predictive power of the model, not how likely, simple or cute it is.
So, on average, you make better decisions. I agree with that much. As I said, a nice useful tool. You can still lose even if you use it (“but I was doing everything right”—Bayesian’s famous last words), while someone who never heard of Bayes can win (and does, every 6⁄49 draw).
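The 6/49 point can be made quantitative: a winner exists nearly every draw, yet the expected value of a single ticket is negative, which is exactly why the Bayesian recommendation not to play is right on average. The jackpot and ticket price below are assumed round numbers, not actual lottery figures.

```python
from math import comb

# A 6/49 lottery: pick 6 numbers out of 49.
p_jackpot = 1 / comb(49, 6)   # about 1 in 13,983,816
jackpot = 5_000_000           # assumed prize
ticket = 2                    # assumed price

ev = p_jackpot * jackpot - ticket
print(f"P(jackpot) = {p_jackpot:.2e}, EV per ticket = {ev:.2f}")
```

The individual winner "beats" the expected-utility maximizer on that draw, but across all ticket buyers the decision rule still comes out ahead.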
It’s “gotten lucky” exactly to the extent that it follows Bayes.
Yes. Cuteness is overridden by evidence, but there is a definite trend in physics and elsewhere that the best models have often been quite cute in a certain sense, so we can use cuteness as a proxy for “probably right”.
Yes, a useful tool, but also the provably optimal and fully general tool. You can still lose, but any other system will cause you to lose even more.
I think we are in agreement for the most part. I’m out.
EDIT: also, you should come to more meetups.
Thursday is a bad day for me...