Yes, a new paper confirms this.
The association between quality measures of medical university press releases and their corresponding news stories—Important information missing
Agreed; those are important considerations. In general, I think a risk for rationalists is changing one’s behaviour on complex and important matters based on individual arguments which, while they appear plausible, don’t give the full picture. Cf. Chesterton’s fence, naive rationalism, etc.
This was already posted a few links down.
One interesting aspect of posts like this is that they can, to some extent, be (felicitously) self-defeating.
As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements, usually by interpreting what is merely intended to be a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.
Thanks Ryan, that’s helpful. Yes, I’m not sure one would be able to do something that has the right combination of accuracy, interestingness, and low cost at present.
Sure, I guess my question was whether you’d think that it’d be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or level of postmodernism, intuitively plausible?
My hunch was that the classification would primarily be based on patterns of word use, but you’re right that it would probably be fruitful to look at patterns of citations as well.
Good points. I agree that what you write within parentheses is a potential problem. Indeed, it is a problem for many kinds of far-reaching norms on altruistic behaviour, compliance with which is hard to observe: such norms might handicap conscientious people relative to less conscientious people to such an extent that they do more harm than good.
I also agree that individualistic solutions to collective problems have a chequered record. The point of 1)-3) was rather to indicate how you potentially could reduce hedge drift, given that you want to do that. To get scientists and others to want to reduce hedge drift is probably a harder problem.
In conversation, Ben Levinstein suggested that it is partly the editors’ role to frame articles in a way such that hedge drift doesn’t occur. There is something to that, though it is of course also true that editors often have incentives to encourage hedge drift as well.
Thanks. My claim is somewhat different, though. Adams says that “whenever humanity can see a slow-moving disaster coming, we find a way to avoid it”. This is an all-things-considered claim. My claim is rather that sleepwalk bias is a pro tanto consideration indicating that we’re too pessimistic about future disasters (perhaps especially slow-moving ones). I’m not claiming that we never sleepwalk into a disaster. Indeed, there might be stronger countervailing considerations, which, if true, would mean that all things considered we are too optimistic about existential risk.
It is not quite clear to me whether you are talking only about instances of sleepwalking here, or also about a predictive error indicating anti-sleepwalking bias: i.e. cases where people wrongly predicted that the relevant actors would act, and yet we sleepwalked into a disaster.
Also, my claim is not that sleepwalking never occurs, but that people on average seem to think that it happens more often than it actually does.
Open Phil gives $500,000 to Tetlock’s research.
Great post. Another issue is why B doesn’t believe Y in spite of believing X and in spite of A believing that X implies Y. Some mechanisms:
a) B rejects that X implies Y, for reasons that are good or bad, or somewhere in between. (Last case: reasonable disagreement.)
b) B hasn’t even considered whether X implies Y. (Is not logically omniscient.)
c) Y only follows from X given some additional premises Z, which B either rejects (for reasons that are good or bad or somewhere in between) or hasn’t entertained. (What Tyrrell McAllister wrote.)
d) B is confused over the meaning of X, and hence is confused over what X implies. (The dialect case.)
Thanks a lot! Yes, super-useful.