See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis against the data to see whether our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis's predictions as another histogram; if there's a lot of data and the two histograms are wildly different, we can be almost certain the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is that a perfect Bayesian wouldn't. The question is: why?
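A minimal sketch of such a check (everything here is illustrative and invented for this comment, not any particular library's API): draw samples from the hypothesis, bin both samples into histograms, and compare them with a chi-square-style discrepancy.

```python
import random

random.seed(0)

def histogram(samples, bins, lo, hi):
    """Count samples into equal-width bins over [lo, hi), clamping outliers to the edge bins."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in samples:
        i = max(0, min(int((x - lo) / width), bins - 1))
        counts[i] += 1
    return counts

# Data: drawn from a distribution the prior did NOT anticipate.
data = [random.gauss(3.0, 1.0) for _ in range(10_000)]
# Hypothesis: the prior says the data should look like N(0, 1).
hypothesis = [random.gauss(0.0, 1.0) for _ in range(10_000)]

obs = histogram(data, 20, -5, 8)
exp = histogram(hypothesis, 20, -5, 8)

# Chi-square-style discrepancy between the two histograms
# (+1 in the denominator avoids division by zero in empty bins).
chi2 = sum((o - e) ** 2 / (e + 1) for o, e in zip(obs, exp))
print(f"discrepancy = {chi2:.1f}")
```

With this much data, the discrepancy comes out enormous, which is the "histograms wildly different, prior almost certainly wrong" situation above.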
I thought that what I’m about to say is standard, but perhaps it isn’t.
Bayesian inference, depending on how thoroughly you do it, does include such a check. You construct a Bayes network (a directed acyclic graph) that connects beliefs to anticipated observations (or to other, intermediate beliefs), establishing marginal and conditional probabilities for the nodes. Since your expectations are jointly determined by the beliefs that lead up to them, getting a wrong answer knocks down the probabilities you assign to those upstream beliefs.
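The smallest possible version of that knock-down effect, as a sketch with invented numbers: one belief node feeding one anticipated-observation node, updated by Bayes' rule when the prediction fails.

```python
# One belief B feeding one anticipated observation O (a two-node network).
p_b = 0.8                 # prior probability the belief is true
p_o_given_b = 0.9         # the belief predicts the observation strongly
p_o_given_not_b = 0.2     # without the belief, the observation is unlikely

# We observe NOT-O: the prediction fails.
p_noto_given_b = 1 - p_o_given_b
p_noto_given_not_b = 1 - p_o_given_not_b

# Bayes' rule: the failed prediction knocks the belief down from 0.8 to 1/3.
posterior = (p_noto_given_b * p_b) / (
    p_noto_given_b * p_b + p_noto_given_not_b * (1 - p_b))
print(f"P(belief | failed prediction) = {posterior:.3f}")
```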
Depending on the relative strengths of the connections, you know whether to reject your parameters, your model, or the validity of the observation: you reject whichever of them, after taking this hit, is left with the lowest probability. (Depending on how detailed the network is, one input belief might be "I'm hallucinating or insane", which may survive with the highest probability.)
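A toy numerical sketch of that determination (all numbers made up for illustration): put a prior on each candidate culprit behind the surprise, weight by how well each one would explain it, and revise whichever belief comes out worst. Here each entry names the "something is wrong here" explanation, so the belief to reject corresponds to the explanation with the *highest* posterior.

```python
# Mutually exclusive explanations for a surprising observation:
# (prior probability, likelihood of the surprise if this explanation is true).
explanations = {
    "parameters wrong":    (0.100, 0.30),
    "model wrong":         (0.050, 0.60),
    "observation invalid": (0.020, 0.90),
    "hallucinating":       (0.001, 0.99),
    "nothing wrong":       (0.829, 0.001),  # the surprise was a fluke
}

# Posterior over explanations after taking the hit (Bayes' rule, normalized).
unnorm = {k: prior * lik for k, (prior, lik) in explanations.items()}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}
for k, v in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{k:>20s}: {v:.3f}")
```

With these numbers, "parameters wrong" and "model wrong" dominate, while "hallucinating" stays negligible despite explaining the surprise well, because its prior was tiny.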
Pearl has also published Bayesian algorithms for inferring conditional (in)dependencies from data, and therefore which kinds of models are capable of capturing a phenomenon. He has furthermore proposed causal networks, which have explicit causal and (oppositely directed) inferential arrows. In that case you don't turn a prior into a posterior: rather, the odds you assign to an event at a node are determined by the incoming causal "message" from the node's parents and, from the other direction, the incoming inferential (diagnostic) message from its children.
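A toy sketch of that message combination (states and numbers invented for illustration): in Pearl-style belief propagation, the belief at a node is the normalized product of the causal message from above and the diagnostic message from below.

```python
# A node X with three states; pi is the causal support arriving from X's
# parents, lam is the diagnostic support arriving from X's children
# (Pearl's pi/lambda messages). Numbers are invented.
states = ["flu", "cold", "healthy"]
pi  = [0.10, 0.30, 0.60]   # what the causes of X predict
lam = [0.90, 0.40, 0.05]   # how well each state explains the evidence below X

# Belief at X is the normalized elementwise product of the two messages.
unnorm = [p * l for p, l in zip(pi, lam)]
z = sum(unnorm)
belief = [u / z for u in unnorm]
print(dict(zip(states, [round(b, 3) for b in belief])))
```

Note how "cold" wins overall even though the causal message favored "healthy" and the diagnostic message favored "flu": neither direction turns into the other's posterior; they are combined.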
But neither "model checking" nor Bayesian methods will come up with hypotheses for you. Model checking can attenuate the odds you assign to wrong priors, but so can Bayesian updating. The catch is that, for computational reasons, a Bayesian may be unable to list all the possible hypotheses, may have to restrict the hypothesis space somewhat arbitrarily, and can be left with only bad ones. But Bayesians aren't alone in that, either.
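A sketch of that failure mode, with invented numbers: a computationally bounded Bayesian whose restricted hypothesis space omits the truth still piles its posterior onto the least bad option, and nothing in the update itself flags the problem.

```python
import math
import random

random.seed(1)

# True generating process: N(5, 1) -- but it is NOT in our hypothesis space.
data = [random.gauss(5.0, 1.0) for _ in range(200)]

# A bounded Bayesian arbitrarily restricted to three candidate means.
hypotheses = {"mu=0": 0.0, "mu=1": 1.0, "mu=2": 2.0}

def log_lik(mu, xs):
    """Log-likelihood of xs under N(mu, 1)."""
    return sum(-0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi) for x in xs)

# Uniform prior over the three hypotheses; posterior via log-sum-exp.
log_post = {h: math.log(1 / 3) + log_lik(mu, data) for h, mu in hypotheses.items()}
m = max(log_post.values())
z = sum(math.exp(lp - m) for lp in log_post.values())
posterior = {h: math.exp(lp - m) / z for h, lp in log_post.items()}
print(posterior)
```

The posterior concentrates essentially all its mass on mu=2, the closest of an all-bad menu, and looks just as confident as it would if the truth were actually on the menu; that is exactly what the model check in the opening comment would catch.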
(Please tell me if this sounds too True Believerish.)
I thought that what I’m about to say is standard, but perhaps it isn’t. [...] Pearl also has written Bayesian algorithms
I have been googling for references to “computational epistemology”, “algorithmic epistemology”, “bayesian algorithms” and “epistemic algorithm” on LessWrong, and (other than my article) this is the only reference I was able to find to things in the vague category of (i) proposing that the community work on writing real, practical epistemic algorithms (i.e. in software), (ii) announcing having written epistemic algorithms or (iii) explaining how precisely to perform any epistemic algorithm in particular. (A runner-up is this post which aspires to “focus on the ideal epistemic algorithm” but AFAICT doesn’t really describe an algorithm.)
Who is “Pearl”?
Oh wow, thanks. I think at the time I was overconfident that some more educated Bayesian had worked through the details of what I was describing. But the causality-related stuff is definitely covered by Judea Pearl (the Pearl I was referring to) in his book *Causality* (2000).