A scary thought: is it really such a good idea to make a rationality textbook? You know, knowing about biases can hurt people. Even if rationality were taught everywhere, that would not necessarily mean a global increase in rationality. People with motivated cognition would use the techniques to improve their debating skills.
Actually, if a rationality textbook became popular, I would expect many religious groups to come up with their own versions. Essentially, all you have to do is choose a different prior: one that gives probability 1 to your sacred teachings; then you can go on and be a good Bayesian.
I don’t think a rationality textbook would make things worse. I just suspect that its positive effects could be easily neutralized. So even if such a textbook helps people who genuinely want to be rational, if we expect it to change society on a larger scale, we could be disappointed.
Essentially, all you have to do is choose a different prior: one that gives probability 1 to your sacred teachings; then you can go on and be a good Bayesian.
There is no such thing as probability 1, and, if the students are taught to update priors according to emerging evidence, I can’t see that prior lasting very long.
There is no such thing as starting with a prior that does not contain probability 1 and reaching probability 1 by doing proper Bayesian updates, agreed. But I am speaking about something else: putting that probability into one’s prior in the first place, as an act of faith.
if the students are taught to update priors according to emerging evidence, I can’t see that prior lasting very long.
If you start with probability 1, and do proper Bayesian updating, you end with probability 1. Of course, unless you run into a direct contradiction and get a division-by-zero error. But that will never happen, because the contradiction will never be perfect—precisely because nothing can have probability 0, except what you put into your priors. If the prior probability of something is 1, and you receive evidence that almost contradicts it, and there is only an epsilon chance of explaining that evidence by B (whatever horrible thing B is), proper Bayesian updating will just get you to believe B.
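To make the point concrete, here is a minimal sketch (the function name and the specific numbers are mine, purely for illustration) of Bayes’ rule applied to a hypothesis with prior probability 1. No matter how strongly the evidence favors the alternative, the posterior is stuck at 1, because the alternative carries zero prior mass:

```python
def bayes_update(prior_a, p_e_given_a, p_e_given_not_a):
    """Posterior P(A|E) by Bayes' rule for a binary hypothesis A."""
    p_e = p_e_given_a * prior_a + p_e_given_not_a * (1 - prior_a)
    if p_e == 0:
        # The "division-by-zero error": evidence impossible under the prior.
        raise ZeroDivisionError("evidence has probability 0 under this prior")
    return p_e_given_a * prior_a / p_e

# Evidence a billion times more likely under not-A than under A...
p = 1.0
for _ in range(10):
    p = bayes_update(p, p_e_given_a=1e-9, p_e_given_not_a=0.999)
print(p)  # ...and the posterior is still exactly 1.0
```

The only escape would be evidence with likelihood exactly zero under A, which triggers the exception above; and as long as some epsilon-probability explanation B remains, that never happens.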
As an illustration, imagine a Tegmark multiverse. We are supposed to give each universe a prior probability according to Solomonoff induction. But suppose that we take only the subset of those universes where some variant of the given faith is true. This subset is non-empty. There is a possible universe where a humanoid being called Yehovah is part of the laws of physics; it’s just an incredibly complex universe, so it has an almost-zero Solomonoff prior. But if you take only the selected subset of universes as your starting point (this is an arbitrary choice, but it is the only one you ever have to make), updating on any evidence will keep you inside this subset, because any evidence can be explained in some very small part of this subset.
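The same mechanism in miniature (a toy model; the four “universes” and their weights are invented for illustration): start from Solomonoff-style weights, zero out every universe outside the chosen subset as the act of faith, and then no likelihood update can ever move probability mass back outside it:

```python
# Illustrative prior weights over four candidate universes
# (shorter description -> larger Solomonoff-style weight).
prior = [0.6, 0.3, 0.09, 0.01]

# The act of faith: restrict support to universes 2 and 3, renormalize.
mask = [0, 0, 1, 1]
faith_prior = [p * m for p, m in zip(prior, mask)]
total = sum(faith_prior)
faith_prior = [p / total for p in faith_prior]

# Evidence that overwhelmingly favors the excluded universes...
likelihood = [0.9, 0.8, 1e-6, 1e-9]
post = [p * l for p, l in zip(faith_prior, likelihood)]
total = sum(post)
post = [p / total for p in post]
# ...yet the excluded universes still end with posterior exactly 0:
print(post)
```

The update merely reshuffles probability among the universes that survived the initial cut; the zeros are permanent, which is exactly why the choice of starting subset is the one decision that matters.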
To become rational, you need to be in a state of mind that allows you to develop towards rationality. By a well-aimed act of motivated cognition you can lock yourself out. Some people think that such an act (although they call it by a different name) is the right thing to do; fortunately, no one is able to do it perfectly.
sighs in relief