Good point. Mathematically I’d put it this way: there are actually many competing alternative theories, and “almost nothing ever happens” is one of them. From Solomonoff induction we know that

P(event | history) ∝ integral_{all theories} P(event | theory) * P(history | theory) * P(theory) dtheory

(proportional rather than equal, since the normalizing constant 1/P(history) is omitted). This basically means that we should weight each theory by the factor P(history | theory): the probability of our entire history of past observations given that theory. What you’re saying is that if a theory is very precise, then P(history | theory) will be high only if history matches the theory very closely. This is why an imprecise theory can carry more weight than a precise but wrong one. “Almost nothing ever happens” is very imprecise, but precisely for that reason its factor P(history | theory) will often exceed that of a precise theory already contradicted by the data. I guess normies grasp this intuitively.
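This weighting is easy to see numerically. Below is a minimal sketch (the theory names, event rates, and 30-day history are made up for illustration, not taken from the discussion): each “theory” is just a claimed per-day event probability, and the vague “almost nothing ever happens” theory ends up dominating the posterior because the precise-but-wrong theory assigns near-zero probability to the observed history.

```python
# Toy Bayesian model average over a few hypothetical "theories",
# each reduced to a single claimed per-day event probability.
from math import prod

theories = {
    "almost nothing ever happens": 0.01,  # vague, boring theory
    "events are common":           0.50,
    "precise but wrong":           0.90,  # confidently predicts events
}
prior = {name: 1 / len(theories) for name in theories}

# Illustrative history: 30 days, the event occurred exactly once.
history = [0] * 29 + [1]

def likelihood(p, hist):
    # P(history | theory): product of per-day Bernoulli probabilities.
    return prod(p if x else 1 - p for x in hist)

# Unnormalized posterior weight of each theory: P(history|theory) * P(theory).
weights = {name: likelihood(p, history) * prior[name]
           for name, p in theories.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

# Posterior-weighted prediction, mirroring the integral in the text:
# P(event | history) = sum over theories of P(event|theory) * P(theory|history)
p_event = sum(theories[name] * posterior[name] for name in posterior)

for name, w in posterior.items():
    print(f"{name:30s} posterior = {w:.6f}")
print(f"P(event tomorrow | history) = {p_event:.4f}")
```

Running this, virtually all posterior mass lands on “almost nothing ever happens”: its likelihood is about 0.99^29 * 0.01 ≈ 0.007, while the precise-but-wrong theory’s likelihood is about 0.1^29 * 0.9, which is astronomically small.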