One thing that keeps bothering me: while Bayesianism is the best epistemological theory we have, nobody is even remotely close to following Bayes' rule in its entirety. It just isn't usable for anything except the most trivial scenarios, and it's not just a matter of throwing more computational power at it; see Quine's holism argument for the scale of the problem.
Add any heuristics on top of it, like the various assumed independences, and it's no longer Bayes' rule; you might as well abandon it completely and use something else entirely.
I know this is only vaguely related to the post, but it keeps bothering me that we pretend to be Bayesians when we aren't in any meaningful sense.
Hm, I’m not sure what Quine’s holism argument does for your point. Quine says that only “science” as a whole can be tested, since you can keep making arbitrary assumptions to salvage any theory. A canonical example might be: They didn’t throw out Newton’s Law of Gravitation when Saturn (or whatever) had an irregular orbit; they assumed a massive object behind it—but why not reject the law of gravitation?
But far from being an example of the intractability of Bayes' theorem, it's actually an example of how Bayesianism can resolve such problems: you build a belief network connecting the theories to the predicted observations. As you encounter new data, you update according to a well-defined rule (based on priors and likelihood ratios), which can be equivalently expressed as Jaynes's maximum-entropy method or as KL-divergence minimization.
This process allows you to determine whether a new observation requires you to believe you made an observation error, whether you should expect an additional observation (like the above case with Saturn), or whether you need to make fundamental revisions to the theory.
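The update rule described above can be sketched in a toy form. Everything here is illustrative: the hypothesis names and all probabilities are made-up numbers, not historical estimates; the point is only that repeated applications of Bayes' rule can shift belief from "observation error" toward "unseen body" without ever forcing a revision of the underlying law.

```python
# Toy Bayesian update over rival explanations for an anomalous orbit
# observation. All priors and likelihoods are invented for illustration.

def update(prior, likelihood):
    """One application of Bayes' rule over a discrete hypothesis space."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

prior = {
    "observation error": 0.60,
    "unseen massive body": 0.35,
    "law of gravitation wrong": 0.05,
}

# First anomalous measurement: compatible with any of the three hypotheses,
# though a correct measurement fits the two physical explanations better.
first = {
    "observation error": 0.5,
    "unseen massive body": 0.9,
    "law of gravitation wrong": 0.9,
}

# An independent confirmation: two separate measurement errors producing
# the same anomaly are unlikely, so that hypothesis takes the hit.
second = {
    "observation error": 0.05,
    "unseen massive body": 0.9,
    "law of gravitation wrong": 0.9,
}

posterior = update(update(prior, first), second)
best = max(posterior, key=posterior.get)
print(best)  # the hidden-body hypothesis ends up dominating
```

With these numbers, the posterior on "unseen massive body" ends up around 0.84, while "law of gravitation wrong" stays small because its prior started small: exactly the pattern in the Saturn example, where the cheap auxiliary hypothesis wins without anyone discarding the law.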
Of course, that’s still just approximating an ideal Bayesian, and conditional independence isn’t a required part of it, just a prior you can start from. But Quine’s holism argument doesn’t show any shortcoming of the use of Bayesian inference in science.
Holism means that for virtually every observation you have to update everything in your network, related or not. You cannot have nice networks with a bunch of small, elegant compartments; it will be one huge mess.
My point is that actual Bayesianism is so far beyond the capabilities of any imaginable being, and Bayesianism with any assumed independence is no longer correct Bayesianism, that it isn't really proper to pretend we're Bayesians in any practical sense.
Isn’t that like saying that Newton’s adherents weren’t “actual Newtonians” because they assumed away enough bodies to make their computations tractable?
Agreed. I changed it to “wannabe” for you :p