Regarding the “sanity waterline,” I don’t believe this concept presents a useful and accurate model of people’s beliefs, not even as a rough first approximation. In my opinion, any action based on such a model must be fundamentally misguided one way or another.
You have argued against a misunderstanding of the sanity waterline concept. The underlying idea is sound: people who have, and systematically apply, a set of skills will not make mistakes of a certain class. The sanity waterline concept is not simply an ordering of the irrationality of wrong beliefs, but an association of skills with the mistakes they prevent. It does not claim that not making a mistake places someone higher above the waterline, and therefore immune to more irrational mistakes; rather, it explicitly calls out the distinction between getting something right because your rationality skills force you to get it right, and getting it right by other means, such as joining the social group that happens to be right.
You are right. My thinking was indeed imprecise here. If we assume that there exists a set of skills such that each skill, if practiced consistently, prevents one from having a specific set of irrational beliefs, then we can impose a partial order on sets of beliefs by observing which set of skills is implied to be absent by each set of beliefs. This partial order can be seen as a ranking of irrationality of different sets of beliefs, and the set of skills shared by a group of people places a lower bound with respect to the partial order, which can then be metaphorically called a “waterline.”
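To make the construction concrete, here is a minimal sketch of that partial order (a Python illustration, not anything from the original discussion; the skill and belief names, and the mapping from skills to prevented beliefs, are invented, since such a mapping is exactly the assumption the construction rests on):

```python
# Hypothetical skill -> set of irrational beliefs that the skill,
# consistently practiced, rules out. All names are invented.
PREVENTS = {
    "notice_confirmation_bias": {"my_side_is_always_right"},
    "check_base_rates": {"anecdotes_beat_statistics"},
    "seek_disconfirmation": {"my_side_is_always_right",
                             "unfalsifiable_pet_theory"},
}

def implied_absent_skills(beliefs):
    """Skills whose consistent practice is incompatible with holding
    any of the given beliefs; the beliefs imply these skills are absent."""
    return {skill for skill, prevented in PREVENTS.items()
            if prevented & beliefs}

def at_most_as_irrational(beliefs_a, beliefs_b):
    """The partial order: A <= B iff every skill whose absence A implies,
    B implies absent as well."""
    return implied_absent_skills(beliefs_a) <= implied_absent_skills(beliefs_b)

def group_waterline(member_beliefs):
    """The largest skill set consistent with every member's observed
    beliefs; the best candidate for the group's shared 'waterline'."""
    absent = set().union(*(implied_absent_skills(b) for b in member_beliefs))
    return set(PREVENTS) - absent

# Two belief sets that are incomparable under the partial order:
a = {"anecdotes_beat_statistics"}
b = {"unfalsifiable_pet_theory"}
print(at_most_as_irrational(a, b))  # False
print(at_most_as_irrational(b, a))  # False
print(group_waterline([a, b]))      # {'notice_confirmation_bias'}
```

Note that the waterline computed this way is only an upper bound inferred from observed beliefs: a belief set can fail to rule out a skill without the skill actually being practiced, which is precisely the rationality-versus-lucky-social-group distinction made above.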
Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful. If anything, it’s dangerous because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality there are issues where, due to all kinds of biases, even the very basic skills fail them.
If anything, it’s dangerous because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality there are issues where, due to all kinds of biases, even the very basic skills fail them.
This is the more general problem of a little knowledge being a dangerous thing (when you think it’s a lot). I find it useful to remind myself of the many ways in which, despite my considerable intelligence, I am extremely stupid. My girlfriend is also helpful in this.
Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful.
Fallacy of gray. Even if there are no actual magical superrationalists, clearly some people are better skilled than others, and a group of people would behave differently depending on this level.
Fallacy of gray. Even if there are no actual magical superrationalists, clearly some people are better skilled than others, and a group of people would behave differently depending on this level.
The question is whether it is possible in practice for individuals or groups to exist who really apply some set of skills with enough consistency that “sanity waterline” becomes a good enough approximation of reality for them. Individuals and groups obviously differ greatly, but it may still be that nobody is good enough for their basic skills to be highly (even if imperfectly) reliable when it comes to the most seductive biases. Even if that claim is false, asserting it is not the fallacy of grey, any more than, say, claiming that nobody can run 100m in less than 9.5s equates athletes with couch potatoes. (The latter claim would be falsified if someone actually managed to run that fast, but even if false, it is not a fallacy of grey, since it merely asserts an upper bound on achievement, not that some people aren’t far closer to it than others.)
Now, I do believe that there are plenty of topics where even the most rational individuals are in serious danger of having their most basic epistemological skills distorted by biases, and therefore, it’s never a good idea to draw any “sanity waterlines.” You may disagree with this view, but not on the grounds that it constitutes the fallacy of grey.
Now, I do believe that there are plenty of topics where even the most rational individuals are in serious danger of having their most basic epistemological skills distorted by biases, and therefore, it’s never a good idea to draw any “sanity waterlines.”
You clearly don’t understand the concept in the way it was intended, and instead criticize a different idea.
You clearly don’t understand the concept in the way it was intended, and instead criticize a different idea.
I allow for that possibility, but I don’t see where my understanding goes wrong (given the correction I made after conceding JGWeissman’s criticism). So without further clarification on your part, I have to rest my case at this point.
Of course, the crucial assumption here is that it is possible for humans to acquire a set of reasoning skills so thoroughly and reliably that they will actually apply them to all issues, no matter what. I don’t think this is possible, and with this in mind, I still don’t think the “waterline” concept is useful.
Keep in mind that this concept was introduced in the context of teaching others. The practical advice is to teach skills that will enable people to give up their false beliefs, rather than directly arguing against the false beliefs, both because emotional attachment makes a direct attack more difficult, and because the particular false beliefs you observe are indicators of a larger problem. This does not require the most extreme case, where the person universally applies the skill in all situations no matter what, though the more reliably the person uses the skill, the better it works. If using a set of skills 90% of the time gives an upper bound of 10% on the probability of making any instance from a class of mistakes, that is not as good as using the skills all the time and never making that kind of mistake, but it is still useful.
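To illustrate that arithmetic, here is a small simulation sketch (not from the original discussion; the parameters are made up): if the skill is applied on each occasion independently with probability p, and the mistake it prevents can only occur on a lapse, the per-occasion mistake probability is bounded above by 1 − p.

```python
import random

def mistake_rate(p_apply, p_mistake_on_lapse, trials=100_000, seed=0):
    """Simulate occasions on which a skill is applied with probability
    p_apply; the class of mistake it prevents can only occur on a lapse,
    and then only with probability p_mistake_on_lapse."""
    rng = random.Random(seed)
    mistakes = sum(
        1
        for _ in range(trials)
        if rng.random() > p_apply               # the skill lapses...
        and rng.random() < p_mistake_on_lapse   # ...and the mistake occurs
    )
    return mistakes / trials

# Applying the skill 90% of the time caps the mistake rate at 10%,
# no matter how badly the lapses go:
print(mistake_rate(0.9, 1.0))  # ~0.10 (worst case: every lapse is a mistake)
print(mistake_rate(0.9, 0.3))  # ~0.03 (still under the 0.10 bound)
```

The 10% figure is only an upper bound: a lapse does not guarantee the mistake, so actual rates are typically lower, which is the sense in which imperfect skill use is still useful.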
If anything, it’s dangerous because people may fall into the trap of thinking that they are above a certain waterline, whereas in reality there are issues where, due to all kinds of biases, even the very basic skills fail them.
Again, this is a technique for teaching. Don’t use it as an excuse to trust yourself.
This is the more general problem of a little knowledge being a dangerous thing (when you think it’s a lot). I find it useful to remind myself of the many ways in which, despite my considerable intelligence, I am extremely stupid. My girlfriend is also helpful in this.
(smile)
Yes, this.
Among the great blessings of my life are the many people in it who can remind me of my stupidity.