Frankly, I think you’re skirting around the real issue: what precisely is the “rationality outreach” supposed to accomplish?
If the goal is to have a community where all false and biased beliefs are criticized without any exception at all, including those held sacred by present respectable opinion, then it’s inevitable that you’ll cause lots of outrage, end up with opinions on some issues that will sound crackpot or extremist to the respectable mainstream, and cause dissension in any realistic group of people. On the other hand, if the choice of issues for criticism is limited by some cost-benefit calculation, then this calculation depends on the exact goals of the group. Specifically, you should attack those false beliefs that interfere with your goals in practice, and only those. For example, a team of physicists cannot tolerate a member who has crazy ideas about physics, but they shouldn’t have a problem with a member who has crazy ideas about economics (Albert Einstein, for example).
So, the question is: what exactly is supposed to be the benefit of making your group explicitly atheist, and does it justify the cost of turning off people who will find this doctrine unpleasant or offensive? To answer this question, you must have a precise idea of what exactly the goals of your group are and how they are supposed to be accomplished. It’s basically the same problem that comes up on LW whenever the suitability of various controversial topics is discussed.
The broad goal, I believe, is raising the sanity waterline in the general public.
The narrow goal, as far as I can tell, is developing a solid local community of strong rationalists, and then doing things with our strengths. (Potentially: invent and test systematic methods for making people and the world more awesome, which may include science, starting businesses, making art, and generally creating things that demonstrate the effectiveness of rationality.)
For the narrow goal, we want to appeal to people who will be unusually good assets. (But not necessarily the usual suspects.) For the broad goal, it would be nice to have pitches for rationality that might nudge anyone, regardless of background, in the right direction.
In that case, I’d propose that if Annie is an adult, she’s essentially unreachable. No matter how much effort you expend trying to coach her on basic rationality skills, she isn’t going to get it, because as a general rule adult humans simply don’t change their basic approach to life that radically. You’d have to replace her entire social milieu with a circle of rationalist associates to have any chance of getting through to her, and even then she’d probably just adopt the surface appearance of rationality as a sort of social-acceptance ritual.
So as a practical matter, Barbara, Caroline, and Donna are the people who might actually join, contribute to, and benefit from a rationalist group, and the question becomes how to balance your appeal to all three instead of restricting yourself to Donnas.
Then the question from your post should be asked in the context of the goals you outline.
Regarding the “sanity waterline,” I don’t believe this concept presents a useful and accurate model of people’s beliefs, not even as a rough first approximation. In my opinion, any action based on such a model must be fundamentally misguided one way or another.
Regarding the goal of developing a local community, you’ve listed a whole bunch of goals, and to answer your initial question, we must begin by asking two other questions. First, is there actually a common body of insight, presumably close to what is called “rationality” on LW, that would be of practical help in all of these endeavors, and what would it consist of? Second, if such a body of insight exists and a group of people is trying to reach, share, and apply it, how much of a hindrance is it if they must avoid criticizing religion in the process?
Unless we have clear and well-argued answers to these questions, I don’t think any productive discussion of the original issue is possible. Looking at people’s comments in this thread, it seems to me that their views on the first of the two preliminary questions are muddled by wishful thinking, in the sense that they are too quick to assume that such a body of insight exists and that they have a good idea of what it is.
> Regarding the “sanity waterline,” I don’t believe this concept presents a useful and accurate model of people’s beliefs, not even as a rough first approximation. In my opinion, any action based on such a model must be fundamentally misguided one way or another.
You have argued against a misunderstanding of the sanity waterline concept. The idea is sound that people who have and systematically apply a set of skills will not make mistakes of a certain class. The sanity waterline concept is not simply an ordering of the irrationality of wrong beliefs, but an association of skills with the mistakes they prevent. It does not claim that not making a mistake places someone higher on the waterline so that they will not make more irrational mistakes; rather, it explicitly calls out the distinction between getting something right because your rationality skills force you to get it right, and getting it right by other means, such as joining the social group that happens to be right.
You are right. My thinking was indeed imprecise here. If we assume that there exists a set of skills such that each skill, if practiced consistently, prevents one from having a specific set of irrational beliefs, then we can impose a partial order on sets of beliefs by observing which set of skills is implied to be absent by each set of beliefs. This partial order can be seen as a ranking of irrationality of different sets of beliefs, and the set of skills shared by a group of people places a lower bound with respect to the partial order, which can then be metaphorically called a “waterline.”
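To make the construction concrete, here is a minimal sketch in Python; the skill and mistake names are invented purely for illustration, not taken from anywhere in this discussion:

```python
# Each skill, if practiced consistently, prevents a set of mistakes.
# (Invented names; any real inventory of skills would look different.)
PREVENTS = {
    "check_base_rates": {"base_rate_neglect"},
    "seek_disconfirmation": {"confirmation_bias", "cherry_picking"},
    "notice_sunk_costs": {"sunk_cost_fallacy"},
}

def skills_implied_absent(observed_mistakes):
    """Skills that cannot be in consistent use, given the mistakes observed."""
    return {skill for skill, prevented in PREVENTS.items()
            if prevented & observed_mistakes}

def at_most_as_irrational(mistakes_a, mistakes_b):
    """The partial order: A <= B iff every skill ruled out by A's mistakes
    is also ruled out by B's."""
    return skills_implied_absent(mistakes_a) <= skills_implied_absent(mistakes_b)

# The skills shared by a group put a lower bound on it (the "waterline"):
group_skills = {"check_base_rates", "seek_disconfirmation"}
waterline = set().union(*(PREVENTS[s] for s in group_skills))
# On this model, no mistake in `waterline` should appear within the group.
```

Note that many pairs of belief sets come out incomparable under this order, which is why it is only a partial order, and why the “waterline” is a lower bound rather than a single scalar score.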
Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful. If anything, it’s dangerous, because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality, there are issues where, due to all kinds of biases, even the very basic skills fail them.
> If anything, it’s dangerous, because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality, there are issues where, due to all kinds of biases, even the very basic skills fail them.
This is the more general problem of a little knowledge being a dangerous thing (when you think it’s a lot). I find it useful to remind myself of the many ways in which, despite my considerable intelligence, I am extremely stupid. My girlfriend is also helpful in this.
(smile)

Yes, this.

Among the great blessings of my life are the many people in it who can remind me of my stupidity.
> Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful.
Fallacy of gray. Even if there are no actual magical superrationalists, clearly some people are better skilled than others, and a group of people would behave differently depending on this level.
> Fallacy of gray. Even if there are no actual magical superrationalists, clearly some people are better skilled than others, and a group of people would behave differently depending on this level.
The question is whether it is possible in practice for individuals or groups to exist who really apply some set of skills with enough consistency that the “sanity waterline” becomes a good enough approximation of reality for them. Individuals and groups obviously differ greatly, but it may still be that nobody is good enough for their basic skills to be highly (even if imperfectly) reliable when it comes to the most seductive biases. And even if this claim is false, it does not represent the fallacy of grey, any more than claiming that nobody can run 100m in less than 9.5s means equating athletes with couch potatoes. (The latter claim may be falsified if someone actually manages to run that fast, but even if false, it’s not a fallacy of grey, since it merely asserts an upper bound on achievement, not that there aren’t people far closer to it than others.)
Now, I do believe that there are plenty of topics where even the most rational individuals are in serious danger of having their most basic epistemological skills distorted by biases, and therefore it’s never a good idea to draw any “sanity waterlines.” You may disagree with this view, but not on the grounds that it constitutes the fallacy of grey.
> Now, I do believe that there are plenty of topics where even the most rational individuals are in serious danger of having their most basic epistemological skills distorted by biases, and therefore it’s never a good idea to draw any “sanity waterlines.”
You clearly don’t understand the concept in the way it was intended, and instead criticize a different idea.
> You clearly don’t understand the concept in the way it was intended, and instead criticize a different idea.
I allow for that possibility, but I don’t see where my understanding goes wrong (given the correction I made after JGWeissman’s criticism that I conceded). So without further clarification on your part, I have to rest my case at this point.
> Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful.
Keep in mind that this concept was introduced in the context of teaching others. The practical advice is to teach skills that will enable people to give up their false beliefs, rather than arguing directly against the false beliefs, both because emotional attachment makes a direct attack more difficult, and because the particular false beliefs you observe are indicators of a larger problem. This does not require the most extreme case, in which the person universally applies the skill in all situations no matter what, though the more reliably the person uses the skill, the better it works. If using a set of skills 90% of the time gives an upper bound of 10% on the probability of making any instance of a class of mistakes, that is not as good as using the skill all the time and never making that kind of mistake, but it is still useful.
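To put rough numbers on that last point, here is a quick simulation; the 0.8 base mistake rate is an arbitrary illustrative figure, and it assumes, simplistically, that lapses in applying the skill are independent of the topic:

```python
import random

# Applying a skill on a fraction `reliability` of occasions caps the
# mistake rate at (1 - reliability), no matter how tempting the mistake.

def mistake_rate(reliability, base_mistake_prob, trials=100_000):
    """Fraction of occasions ending in a mistake. When the skill is
    applied, it prevents the mistake entirely; when it lapses, the
    mistake occurs with its base probability."""
    mistakes = sum(
        1
        for _ in range(trials)
        if random.random() >= reliability          # the skill lapses...
        and random.random() < base_mistake_prob    # ...and the mistake occurs
    )
    return mistakes / trials

for r in (0.0, 0.5, 0.9, 1.0):
    print(f"reliability {r:.0%}: mistake rate ~ {mistake_rate(r, 0.8):.3f}")
# Roughly 0.800, 0.400, 0.080, 0.000: the rate never exceeds
# (1 - reliability), even with a base mistake probability of 0.8.
```

The caveat, of course, is the independence assumption: if lapses cluster on exactly the most seductive topics, the bound is much weaker where it matters most, which is the point pressed elsewhere in this thread.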
> If anything, it’s dangerous, because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality, there are issues where, due to all kinds of biases, even the very basic skills fail them.
Again, this is a technique for teaching. Don’t use it as an excuse to trust yourself.