Michael, you can’t have a prediction market without a way to pay off the bets, and if you have that you can measure personal accuracy directly, if you just wait.
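Measuring personal accuracy once bets pay off is straightforward with a proper scoring rule. As a hypothetical sketch (not from the original comment), the Brier score does this for binary bets: lower is better, 0 is perfect, and 0.25 is what you get by always saying 50%.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and
    realized 0/1 outcomes; a proper scoring rule for binary bets."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 90%, 70%, 20% on three bets that resolved
# yes, yes, no:
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # about 0.047 (lower is better)
```

Because the rule is proper, a bettor maximizes their expected score by reporting their honest probabilities, which is what makes "just wait" a workable accuracy test.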
Robin_Hanson2
Psychologists have developed a self-deception test, which includes questions like “Did you ever hate your mother?”, on the assumption that nearly everyone did but few want to admit it. See:
Paulhus, Delroy L. “Self-Deception and Impression Management in Test Responses.” In Angleitner, A. & Wiggins, J. S. (Eds.), Personality Assessment via Questionnaires. New York, NY: Springer, 1986, 143–165.
Perhaps questions like those might be a good part of a rationality test and practice regime.
We seem to mostly agree about what we are about here, but it seems damn hard to very precisely define exactly what. I guess I’ll focus on coming up with concrete examples of bias and concrete mechanisms for avoiding it, and set aside for now the difficult task of defining it.
Pdf, I didn’t mean to imply that Eliezer’s approach was inferior to the approach I was taking, just that all the approaches run into problems when you try to become more precise.
Most abstract beliefs most people have make pretty much no difference to their actions. They hold those beliefs not to advise action but to help them think and talk about interesting topics, so they can win friends (and mates and employers) and influence people. For these purposes, changing their minds may well not usually be a good deal.
pdf, yes, by “abstract” I mean about large abstractions, rather than the specifics of daily life. Some abstractions are useful of course, but most of them are only tenuously related to daily life.
Eliezer, I just meant to point out that while your advice is great for someone who really cares about reducing belief error, it may understandably not be of much use for the usual purposes of most not-directly-practical conversations. Unfortunately this may well be the case for most of the advice we offer here at Overcoming Bias.
Eliezer, perhaps we find your argument so clear and persuasive that we don’t have much to say about it directly, but we want to comment on something so all will see we are paying attention. Perhaps blog comments need some sort of smiley nodding icon option, letting us indicate our pleasure with your post without needing words. :)
This is of course a topic I have a lot to say about, but alas I left on a trip just before Eliezer and Hal made their posts, and I’m going to be pretty busy for the next few days. Just wanted all to know I will get around to responding when I get time.
(I finally have time to reply; sorry for the long delay.)
Eliezer, one can reasonably criticize a belief without needing to give an exact algorithm for always and exactly computing the best possible belief in such situations. Imagine you said P(A) = .3 and P(notA) = .9, and I criticized you for not satisfying P(A)+P(notA) = 1. If you were to demand that I tell you what to believe instead, I might suggest you renormalize, and assign P(A) = 3⁄12 and P(notA) = 9⁄12. To that you might reply that those are most certainly not always the best exact numbers to assign. You know of examples where the right thing to do was clearly to assign P(A) = .3 and P(notA) = .7. But surely this would not be a reasonable response to my criticism. Similarly, I can criticize disagreement without offering an exact algorithm which always computes the best way to resolve the disagreement. I would suggest you both move toward a middle belief in the same spirit I might suggest renormalizing when things don’t sum to one, as a demonstration that apparently reasonable options are available.
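The renormalization suggestion above is a one-liner in code. A minimal sketch (the function name is mine, not from the comment): divide each probability by the total, which preserves their ratios while forcing the sum to one.

```python
def renormalize(probs):
    """Scale probabilities over exhaustive, exclusive outcomes
    so they sum to 1, preserving their ratios."""
    total = sum(probs)
    return [p / total for p in probs]

# The example from the text: P(A) = .3 and P(notA) = .9 renormalize
# to approximately 0.25 and 0.75, i.e. 3/12 and 9/12.
print(renormalize([0.3, 0.9]))
```

Renormalizing is not claimed to be the uniquely best repair of an incoherent assignment, only a demonstrably reasonable one, which is the point of the analogy.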
Eliezer, you describe cases of dreamers vs folks awake, of super-intelligences vs schizophrenics who think they are God, of creationists vs. their opponents, and of a Verizon customer vs customer support, all as cases where it can be very reasonably obvious to one side that the other side is completely wrong. The question of course is what exactly identifies such cases, so that you can tell if you are in such a situation at any given moment.
Clearly, people having the mere impression that they are in such a situation is a very unreliable indicator. So if not such an impression, what exactly does justify each of these exemplars in thinking they are obviously right?
It seems to me that even if we grant the possibility of such cases, we must admit that people are far too quick to assume that they are in such cases. So until we can find a reason to think we succumb to this problem less than others, we should try to invoke this explanation less often than we were initially inclined to.
Carl, I’d class schizophrenics with parrots and chatbots: creatures so obviously broken to so many observers that self-interested bias is plausibly a minor factor in our beliefs about their rationality. For creationists I want to instead say that I usually have to go with the rough middle of expert opinion, and that goes against creationists. But there is the thorny issue of who exactly are the right experts for this topic. For Verizon I’d say what we are seeing is subject to a selection effect; probably Verizon is usually right and the customers are wrong.
I guess the main issue raised here is not what to believe, but what to say to people who may misinterpret you. In principle any answer might be justified, though I worry that people will try to excuse their irrational beliefs by saying that they are just talking that way to deal with irrational others.
Paul, the claim isn’t that “I don’t know” is never right. The claim is that you should only say it when it is true.
I have had this experience several times in my life; I come across clear enough evidence that settles for me an issue I had seen long disputed. At that point my choice is to either go back and try to persuade disputants, or to continue on to explore the new issues that this settlement raises. As Eliezer implicitly advises, after a short detour to tell a few disputants, I have usually chosen this second route. This is one explanation for the existence of settled but still disputed issues; people who learn the answer leave the conversation.
Matt, yes of course, one should be very cautious about drawing conclusions contrary to a large community of discussion.
People can fool themselves, hallucinate, and even go insane.
This is the key point: when people have crazy moments they are more likely, compared to non-crazy moments, to make extreme claims, and this is what makes it reasonable to be skeptical of extreme claims.
Academic communities can also have crazy moments, which is why we should be more skeptical of extreme published claims.
Given your emphasis on replication, it becomes extremely important to evaluate how an academic field actually practices, rewards, and interprets attempts at replication. They may give replication only lip service.
Eliezer, I think saying “some hypotheses are just too extraordinary” is misleading in that it attributes the problem to the hypotheses, while the more fundamental problem is the tendency of some types of people to be too gullible about some types of extreme claims.
Yes, academics largely train people to follow various standard procedures as social conventions, instead of getting people to really understand the reasons for those conventions. Apparently it is very hard to teach and test regarding the underlying reasons. That is the fact that really gives me pause.
To continue with this metaphor, it seems what we need is a good set of problems to test our rationality (i.e., ability to resist common biases). As with any cognitive test, the details of the test can’t be known in advance, or people would just memorize answers instead of developing skills. So we need to collect a rather large set of good test questions, or create a generator to produce sets of them automatically. Not easy, but perhaps worth doing.
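A hypothetical illustration of the generator idea (nothing here is from the original text): produce fresh two-alternative items with a known correct answer, so test content cannot be memorized in advance. Real bias tests would need far richer item types; this only shows the mechanical part.

```python
import random

def make_question(rng):
    """Return (prompt, correct_answer) for a freshly generated
    two-alternative comparison item."""
    while True:
        a, b = rng.sample(range(100, 1000), 2)
        x, y = rng.sample(range(100, 1000), 2)
        if a * b != x * y:  # regenerate on the rare tie
            break
    prompt = f"Which product is larger: {a} x {b} or {x} x {y}?"
    answer = "first" if a * b > x * y else "second"
    return prompt, answer

rng = random.Random(0)  # seeded only for reproducible demo output
for _ in range(3):
    print(make_question(rng))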