Hopefully an AI will be able to get its hands on large amounts of data. Once it has that, it doesn’t matter very much what its priors were.
Agreed, but the priors can in principle be strong enough that hypothesis A will always be favored over B no matter how much data you have, even though P(data|B) is orders of magnitude higher than P(data|A).
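As a back-of-the-envelope illustration (the numbers here are made up, not from the thread): writing Bayes' rule in log-odds form, the posterior odds are just the prior odds plus the summed log-likelihood ratios. A finite batch of evidence carrying a 10^3 likelihood ratio for B can indeed lose to a 10^6 prior for A, but with repeated independent evidence the sum keeps growing and eventually wins, unless the prior on B is literally zero.

```python
def posterior_log_odds_for_A(prior_log_odds_A, log_lr_for_B_per_obs, n_obs):
    """Log10 posterior odds P(A|data)/P(B|data) after n_obs i.i.d. observations.

    prior_log_odds_A: log10 of the prior odds P(A)/P(B).
    log_lr_for_B_per_obs: log10 of P(obs|B)/P(obs|A) for one observation.
    """
    return prior_log_odds_A - n_obs * log_lr_for_B_per_obs

# Hypothetical numbers: the prior favors A by 10^6, and each observation
# favors B by a factor of 10 (so 3 observations give the 10^3 ratio above).
prior = 6.0     # log10 prior odds in favor of A
per_obs = 1.0   # log10 likelihood ratio in favor of B, per observation

for n in (3, 5, 10):
    lo = posterior_log_odds_for_A(prior, per_obs, n)
    print(f"n={n:2d}: log10 posterior odds for A = {lo:+.1f} "
          f"-> favors {'A' if lo > 0 else 'B'}")
# n= 3: +3.0 -> A still wins despite a 10^3 likelihood ratio favoring B
# n= 5: +1.0 -> A still wins
# n=10: -4.0 -> B wins; only a literally zero prior on B holds out forever
```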
Is there a bound on the amount of data needed to correct a prior that is off by a given magnitude? Likewise, if the probabilities come from a changing system, I presume the pdf estimates could well be consistently inaccurate, since they are constantly adjusting to events whose local probability is itself changing. Does the Bayesian approach help here over, say, fitting a model to arbitrary samples? Is it, in effect, just one model-fitting strategy, no more reasonable than any other?
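On the "is there a bound" question, a rough sketch (my own, assuming i.i.d. data and two fixed Bernoulli models, which is exactly the assumption a changing system breaks): in expectation each observation shifts the log odds toward the true model by the KL divergence between the two models, so the expected number of observations needed to cancel a prior misweighted by a given factor is roughly log(prior odds) divided by that KL.

```python
import math

def kl_bernoulli(p_true, p_model):
    """KL divergence D(p_true || p_model) for Bernoulli models, in nats."""
    return (p_true * math.log(p_true / p_model)
            + (1 - p_true) * math.log((1 - p_true) / (1 - p_model)))

def expected_obs_to_overcome_prior(prior_odds_against_truth, p_true, p_model):
    """Expected number of i.i.d. observations before the posterior favors
    the true model, given a prior misweighted against it.

    Each observation moves the log odds toward the truth by
    D(p_true || p_model) in expectation, so we just divide.
    """
    return math.log(prior_odds_against_truth) / kl_bernoulli(p_true, p_model)

# Hypothetical numbers: the prior favors the wrong model 10^6 : 1, the true
# coin comes up heads 60% of the time, and the wrong model says 50%.
n = expected_obs_to_overcome_prior(1e6, p_true=0.6, p_model=0.5)
print(f"~{n:.0f} observations needed in expectation")  # about 686
```

If the underlying probabilities drift, the per-observation KL term is no longer a constant, and this kind of bound stops applying in any simple form, which is the worry raised above.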