I think this boils down to the https://en.wikipedia.org/wiki/Reference_class_problem. How similar is this novel situation (a quantum configuration distinct from any that has ever existed in the universe) to whatever you’ve used to come up with a prior?
Are you thinking “are dogs good”, or “are dogs I have encountered on this corner good”, or “are dogs wearing a yellow collar that I have encountered on this corner at 3:15pm when it’s not raining good”, or … And, of course, with enough specificity, you have zero examples that will have updated your universal prior. Even if you add second-hand or third-hand data, how many reports of good or bad interactions with dogs of this breed, weight, age, location, and interval since last meal do you actually have to compare against?
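A toy sketch of why this bites, with entirely made-up data: as the reference class narrows, the number of matching observations collapses and the posterior falls back toward the prior. The encounter records and class definitions below are hypothetical, invented just for illustration, assuming a Beta(1, 1) prior on “this dog is good”.

```python
# Hypothetical past encounters: (breed, corner, collar, raining, good?)
encounters = [
    ("lab",    "5th&Main", "red",    False, True),
    ("lab",    "5th&Main", "yellow", False, True),
    ("poodle", "5th&Main", "yellow", True,  False),
    ("lab",    "Oak&3rd",  "yellow", False, True),
    ("husky",  "5th&Main", "yellow", False, False),
]

def posterior_mean(matches):
    """Beta(1, 1) prior updated on the matching encounters."""
    good = sum(1 for e in matches if e[4])
    n = len(matches)
    return (1 + good) / (2 + n), n

# Progressively narrower reference classes:
classes = {
    "all dogs":            lambda e: True,
    "dogs on this corner": lambda e: e[1] == "5th&Main",
    "+ yellow collar":     lambda e: e[1] == "5th&Main" and e[2] == "yellow",
    "+ not raining":       lambda e: e[1] == "5th&Main" and e[2] == "yellow"
                                     and not e[3],
    "+ breed == corgi":    lambda e: e[1] == "5th&Main" and e[2] == "yellow"
                                     and not e[3] and e[0] == "corgi",
}

for name, pred in classes.items():
    p, n = posterior_mean([e for e in encounters if pred(e)])
    print(f"{name:22s} n={n}  P(good)={p:.2f}")
```

By the last class there are zero matching examples, so the estimate is just the Beta(1, 1) prior mean of 0.5: all that specificity bought you nothing.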
This doesn’t make Bayes useless, but you have to understand that this style of probability is about your uncertainty, not about any underlying real thing. Your mental models of categorization and induction are part of your prediction framework just as much as individual updates are (because they tell you how and when to apply updates). Now you get to assign a prior probability that your model applies to this update, as well as performing the update itself. And then update the probability that your model did or did not apply, using a meta-model, based on the outcome. And so on, until your finite computing substrate gives up and lets System 1 make a guess.
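A minimal sketch of that meta-level move, again with made-up numbers: keep a weight on each candidate model of the situation, and let each observed outcome update both the prediction and the weights (i.e., Bayesian model averaging). The two models and their probabilities are purely hypothetical.

```python
# Each hypothetical model's predicted P(good encounter):
models = {"breed-based": 0.8, "location-based": 0.4}

# Prior over which model applies to this situation:
weights = {"breed-based": 0.5, "location-based": 0.5}

def observe(good: bool):
    """Bayes-update the model weights given one observed outcome."""
    global weights
    # Likelihood of the outcome under each model:
    lik = {m: (p if good else 1 - p) for m, p in models.items()}
    z = sum(weights[m] * lik[m] for m in models)  # normalizer
    weights = {m: weights[m] * lik[m] / z for m in models}

def predict():
    """Predictive probability: weighted average over models."""
    return sum(weights[m] * models[m] for m in models)

print(f"before: P(good)={predict():.2f}")
observe(good=True)  # one good encounter
print(f"after:  P(good)={predict():.2f}  weights={weights}")
```

Iterating this (weights on the weights, a meta-meta-model, and so on) is exactly the regress that has to bottom out somewhere in a System 1 guess.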