I’m not really in the field, but I am vaguely familiar with the literature, and this isn’t how it works (though you might get that impression from reading LW).
A vision algorithm might face the following problem: reality picks an underlying physical scene and an image from some joint distribution. The algorithm looks at the image and must infer something about the scene. In this case, you need to integrate over a huge space to calculate likelihoods, which is generally intractable and so requires some algorithmic insight. For example, if you want to estimate the probability that there is an apple on the table, you need to integrate over the astronomically many possible scenes in which there is an apple on the table.
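To make the shape of the problem concrete, here is a toy sketch (my own construction, not anything from the vision literature) of that marginalization done by importance sampling from the prior. The "scene" is collapsed to a single latent variable, an apple's position or no apple at all, and the "image" is a noisy reading of it; the real version would have astronomically many scene variables, which is exactly why the naive integral blows up.

```python
import math
import random

random.seed(0)

# Toy "scene": the apple's horizontal position in [0, 1], or None if
# there is no apple. Prior: 50% chance an apple exists at all.
def sample_scene():
    if random.random() < 0.5:
        return random.uniform(0.0, 1.0)  # apple at some position
    return None                          # no apple in the scene

# Toy likelihood p(image | scene): the "image" is a noisy observation of
# the apple's position (or of the empty table, centered at 0.5).
def likelihood(image, scene, sigma=0.1):
    mean = scene if scene is not None else 0.5
    return math.exp(-0.5 * ((image - mean) / sigma) ** 2)

# P(apple | image) estimated by weighting prior samples by likelihood:
# the self-normalized importance-sampling estimate of the posterior.
def p_apple_given_image(image, n_samples=100_000):
    num = den = 0.0
    for _ in range(n_samples):
        scene = sample_scene()
        w = likelihood(image, scene)     # weight each prior draw
        num += w * (scene is not None)   # mass on "apple present" scenes
        den += w
    return num / den

# An observation far from the empty-table reading strongly suggests an apple.
print(p_apple_given_image(0.9))
```

The point of the sketch is the `for` loop: it is a sum over sampled scenes standing in for an integral over all scenes, and with a realistic scene space, drawing from the prior like this would essentially never hit the scenes that matter, which is where the algorithmic insight has to come in.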
I don’t know if this contradicts you, but this is a problem that biological brain/eye systems have to solve (“inverse optics”), and Steven Pinker has an excellent discussion of it from a Bayesian perspective in his book How the Mind Works. He mentions that the brain relies heavily on priors that match our environment, which significantly narrows down the possible scenes that could “explain” a given retinal image pair. (You get optical illusions when a scene violates these assumptions.)
There are two parts to the problem: one is designing a model that describes the world well, and the other is using that model to infer things about the world from data. I agree that “Bayesian” is the right adjective for this process, but I’m not convinced that modeling the world is the most interesting part.