You cannot start with a prior that does not contain probability 1 and reach probability 1 by doing proper Bayesian updates. But I am speaking about something else: including that probability in one's priors from the start, as an act of faith.
If the students are taught to update their priors according to emerging evidence, I can't see that prior lasting very long.
If you start with probability 1 and do proper Bayesian updating, you end with probability 1. The only exception is running into a direct contradiction and getting a division-by-zero error. But that will never happen, because the contradiction will never be perfect, precisely because nothing can have probability 0 unless you put it into your priors. If the prior probability of something is 1, and you get evidence that almost contradicts it, and there is only an epsilon chance of explaining that evidence by B (whatever horrible thing B is), proper Bayesian updating will simply get you to believe B.
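A minimal sketch of the arithmetic (the numbers are hypothetical, just to illustrate the point): with a prior of exactly 1, Bayes' rule returns 1 for any evidence that is not literally impossible under the hypothesis; the only way to break it is a perfect contradiction, which makes the denominator 0.

```python
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' rule; raises ZeroDivisionError
    only when the evidence has probability 0 under every hypothesis."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Prior of exactly 1: the posterior stays 1 for any evidence that is not
# perfectly impossible under H. Here P(E|H) is a tiny epsilon, i.e. H
# "explains" the evidence only via some horrible story B -- yet the
# posterior is still 1, so you end up believing B.
print(bayes_update(1.0, 1e-9, 0.999))   # -> 1.0

# A perfect contradiction (P(E|H) = 0) is the only thing that breaks it:
# the denominator is 0 and the update is undefined.
# print(bayes_update(1.0, 0.0, 0.999))  # ZeroDivisionError
```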
As an illustration, imagine a Tegmark multiverse. We are supposed to give each universe a prior probability according to Solomonoff induction. But suppose we take only the subset of those universes in which some variant of the given faith is true. This subset is non-empty: there is a possible universe where a humanoid being called Yehovah is part of the laws of physics; it is just an incredibly complex universe, so it has an almost-zero Solomonoff prior. But if you take only that selected subset of universes as your starting point (this is an arbitrary choice, but it is the only one you ever have to make), updating on any evidence will keep you inside the subset, because any evidence can be explained in some very small part of it.
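A toy sketch of the same point (the universes and likelihoods are hypothetical, not actual Solomonoff weights): if the prior puts all its mass on an arbitrarily chosen subset of hypotheses, renormalization after any evidence only redistributes mass within that subset; the excluded universes stay at probability 0 forever.

```python
# Toy hypothesis space: a few "universes", with all prior mass placed on
# an arbitrarily chosen subset (the excluded universe gets prior 0).
priors = {"faith_variant_1": 0.7, "faith_variant_2": 0.3, "naturalistic": 0.0}

# Hypothetical likelihoods of some observed evidence under each universe.
likelihoods = {"faith_variant_1": 1e-6, "faith_variant_2": 1e-3, "naturalistic": 0.5}

def update(priors, likelihoods):
    """One Bayesian update; hypotheses with prior 0 stay at 0."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

posterior = update(priors, likelihoods)
print(posterior)
# The "naturalistic" universe keeps probability 0, however well it would
# explain the evidence; all the mass just shifts around inside the subset.
```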
To become rational, you need to be in a state of mind that allows you to develop towards rationality. By a proper act of motivated cognition you can lock yourself out. Some people think that such an act (although they call it by a different name) is the right thing to do; fortunately, no one is able to do it perfectly.
sighs in relief