Probabilities go up or down, but there is no magic threshold beyond which they change qualitatively into “knowledge”.
As a matter of fact, there are thresholds below which the extra processing cost does not pay off (or, in the case of a human head, below which it is extremely implausible that the processing will even be performed correctly).
Probabilistic reasoning is, in general, computationally expensive. In many situations, a very large number of combinations of uncertain parameters has to be processed, the cross-correlations must be accounted for, et cetera.
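To make the expense concrete, here is a toy sketch (my own illustration, not from the original comment): with n binary uncertain parameters, the full joint distribution has 2**n entries, so any reasoning that sums over all combinations blows up exponentially.

```python
from itertools import product

def joint_size(n_binary_params: int) -> int:
    """Number of entries in a full joint distribution over n binary parameters."""
    return 2 ** n_binary_params

def enumerate_states(n: int):
    """Enumerate every combination of n binary uncertain parameters."""
    return list(product([False, True], repeat=n))

print(joint_size(10))  # 1024 combinations -- still fine
print(joint_size(40))  # about 1.1e12 -- already infeasible to enumerate
assert len(enumerate_states(4)) == joint_size(4)
```

Ten uncertain binary facts are manageable; forty are already beyond brute force, and real belief networks have far more.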
Actions conditioned on evidence generally have higher expected utility than actions not conditioned on evidence, and the same goes for the processing of beliefs conditioned on evidence.
The expected-utility sums for things such as expenditure of resources include a term for resources kept for future uses, which may be better conditioned on evidence; actions that are less evidence-conditioned than usual ought to lose out to the bulk of the possible ways one may act in the future (edit: the ways which you can’t explicitly enumerate).
Those are just some of the thresholds that an optimally programmed intelligence (on a physically plausible computer) would apply.
This is sort of like going on about what Maxwell’s equations taught you about painting. Maxwell’s equations are quite far from painting, about as far in terms of inference length as Bayes’ theorem is from most actual decision making or belief forming. edit: make that much further, actually, considering that there’s no AI.
Let’s use an example to clarify. There is Bob. Bob being rich would be evidence for him having a good job. Bob having a good job would be evidence for him being rich. Both of those would be evidence with regard to Bob’s education, and so on and so forth. Everything cross-correlates with everything else, exact belief propagation is NP-hard, the algorithms for computing it are very nontrivial, and various subtle implementation errors would make everything converge on a completely wrong value.
The strengths of the relevant relations between beliefs about Bob are themselves beliefs, so the graph is pretty damn huge. When the known probabilities are fairly close to 0 or 1, the problem is tractable, provided the unknowns are close to 0 or 1 as well. But when they’re closer to the middle, you’re dealing with a very complicated relation. And if you could compute the resulting equation in your head, well, there are a lot of lesser engineering tasks that you should absolutely breeze through.
edit: expanded a bit. Really, we do know how to progressively approximate from quantum electrodynamics to geometric optics to drawing 3d shapes to painting, but we do not know how to get from Bayes’ theorem to a full-blown AI on physically plausible hardware.
I agree with your points about the value of information. Indeed, as Vaniver said, Bayesianism (i.e., “qualitative Bayes”), together with the idea of expected-utility maximization, makes the importance of VoI especially salient and easy to understand. So I’m a little puzzled by your conclusion that
This is sort of like going on about what Maxwell’s equations taught you about painting.
… because your argument leading up to this conclusion seems to me to be steeped in Bayesian thinking through-and-through :). E.g., this:
The expected-utility sums for things such as expenditure of resources include a term for resources kept for future uses, which may be better conditioned on evidence; actions that are less evidence-conditioned than usual ought to lose out to the bulk of the possible ways one may act in the future (edit: the ways which you can’t explicitly enumerate).
Those are just some of the thresholds that an optimally programmed intelligence (on a physically plausible computer) would apply.
I’d describe Bayesianism as a belief in the powers of qualitative Bayes.
E.g., you seem to actually believe that taking low-grade evidence into account, and qualitatively at that, is going to make you form more correct beliefs. No, it won’t. Myths about Zeus are weak evidence for a great many things, a lot of which would be evidence against Zeus.
The informal algebra of “small”, “a little”, “weak”, “strong”, “a lot” just doesn’t work for the equations involved, and even if you miraculously used actual real numbers behind those labels, you’d still have enormously huge sums over all the things implied by the existence of the myths.
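One failure mode of stacking “weak evidence” can be shown numerically (my own toy numbers): two observations that each look like independent weak evidence but actually derive from the same source, so naively multiplying in both likelihood ratios double-counts.

```python
prior_odds = 1.0   # 1:1 prior odds on hypothesis H
lr = 2.0           # each observation carries a likelihood ratio of 2 for H

# Naive treatment: assume the two observations are independent.
naive_posterior_odds = prior_odds * lr * lr      # 4:1

# Correct treatment: the second observation is fully redundant with the
# first, so only one likelihood ratio applies.
correct_posterior_odds = prior_odds * lr         # 2:1

naive_p = naive_posterior_odds / (1 + naive_posterior_odds)
correct_p = correct_posterior_odds / (1 + correct_posterior_odds)
print(naive_p, correct_p)  # 0.8 vs about 0.667
```

With dozens of weakly correlated pieces of evidence the gap between the naive product and the true posterior can become arbitrarily large, which is why the informal labels give no guidance on their own.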
… because your argument leading up to this conclusion seems to me to be steeped in Bayesian thinking through-and-through :).
Firstly, I’m trying to deal only with things that I am very confident about (the computational difficulties), so the inferences are ordinary logic; and secondly, I’m trying to persuade you, so I express it in your ideology.
edit: To summarize. You are accustomed to processing evidence1, and to saying that many things are not evidence1. Bayes taught you that everything is evidence2. You started treating everything as evidence1 because it’s the same word. Whereas evidence1 is evidence that is strong enough and unequivocal enough that a lot of quite rough but absolutely essential approximations work correctly (so it can be more or less usefully processed), while evidence2 is weak and nearly equivocal, all things considered, and those approximations will just plain not work, while exact solutions are too expensive and very complicated even for simple cases such as my Bob example above.