But Have They Engaged With The Arguments? [Linkpost]


There’s an interestingly pernicious selection effect in epistemology, where people can be led into false claims: when non-believers try to engage with a chain of arguments, the unconvinced drop out at random steps, so past a few steps the believers/evangelists who accept all the arguments end up in a secure-feeling position that the arguments are right and that people who object to them are insane/ridiculous/obviously trolling, no matter whether the claim is actually true:


What’s going wrong, I think, is something like this. People encounter uncommonly-believed propositions now and then, like “AI safety research is the most valuable use of philanthropic money and talent in the world” or “Sikhism is true”, and decide whether or not to investigate them further. If they decide to hear out a first round of arguments but don’t find them compelling enough, they drop out of the process. (Let’s say that how compelling an argument seems is its “true strength” plus some random, mean-zero error.) If they do find the arguments compelling enough, they consider further investigation worth their time. They then tell the evangelist (or search engine or whatever) why they still object to the claim, and the evangelist (or whatever) brings a second round of arguments in reply. The process repeats.

As should be clear, this process can, after a few iterations, produce a situation in which most of those who have engaged with the arguments for a claim beyond some depth believe in it. But this is just because of the filtering mechanism: the deeper arguments were only ever exposed to people who were already, coincidentally, persuaded by the initial arguments. If people were chosen at random and forced to hear out all the arguments, most would not be persuaded.
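Here is a minimal simulation of that filtering mechanism (my own illustrative sketch, not from the original post; the number of arguments, the slightly-negative true strength, the zero threshold, the Gaussian noise, and the definition of "persuaded" as the summed perceived strength being positive are all assumptions made for concreteness):

```python
# Toy model of the filtering mechanism: perceived strength of each argument
# is its true strength plus mean-zero Gaussian noise; a listener keeps
# engaging only while each argument clears a zero threshold.
import numpy as np

rng = np.random.default_rng(0)

K = 6                # number of argument rounds (assumed)
TRUE_STRENGTH = -0.2 # the case is actually slightly weaker than convincing (assumed)
N_PEOPLE = 100_000
DEPTH = 4            # "engaged beyond some depth" = found the first 4 compelling

# Each row: one person's noisy perception of each of the K arguments.
perceived = TRUE_STRENGTH + rng.normal(size=(N_PEOPLE, K))

# Who keeps engaging past DEPTH rounds? Only those for whom every one of the
# first DEPTH arguments happened to seem compelling (perceived strength > 0).
still_engaged = (perceived[:, :DEPTH] > 0).all(axis=1)

# "Persuaded" = the whole case, summed over all K arguments, seems positive.
persuaded = perceived.sum(axis=1) > 0

print(f"Engaged past depth {DEPTH}: {still_engaged.mean():.1%} of people")
print(f"Persuaded, among deep engagers: {persuaded[still_engaged].mean():.1%}")
print(f"Persuaded, if everyone were forced to hear all {K} arguments: "
      f"{persuaded.mean():.1%}")
```

With these (made-up) numbers, only a few percent of people engage past the fourth argument, the large majority of those deep engagers (on the order of 90%) find the overall case persuasive, yet if everyone were forced to hear all six arguments only around 30% would be persuaded.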

Perhaps more disturbingly, if the case for the claim in question is presented as a long fuzzy inference, with each step seeming plausible on its own, individuals will drop out of the process by rejecting the argument at random steps, each of which most observers would accept. Believers will then be in the extremely secure-feeling position of knowing not only that most people who engage with the arguments are believers, but even that, for any particular skeptic, her particular reason for skepticism seems false to almost everyone who knows its counterargument.
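A second sketch, under the same toy assumptions, for this multi-step version: if each step is individually plausible (accepted by roughly 80% of the people who evaluate it), most people still reject the chain somewhere, scattered across the steps, while each particular objection looks wrong to most of the people who have engaged with that step.

```python
# Same toy setup, multi-step chain: each step individually plausible, yet
# rejections scatter across steps, so any one skeptic's objection is to a
# step that most of its evaluators accepted.
import numpy as np

rng = np.random.default_rng(1)
K, N = 6, 100_000
STEP_STRENGTH = 0.85                       # each step is individually plausible (assumed)
perceived = STEP_STRENGTH + rng.normal(size=(N, K))
accepted = perceived > 0

# First step each person rejects (K means they accepted the whole chain).
reject_at = np.where(~accepted.all(axis=1), (~accepted).argmax(axis=1), K)

print(f"Accept every step (full believers): {(reject_at == K).mean():.0%}")
for step in range(K):
    evaluated = reject_at >= step          # accepted all earlier steps, so saw this one
    rate = accepted[evaluated, step].mean()
    print(f"Step {step}: {rate:.0%} of those who evaluated it accepted it")
```

Roughly a quarter of people end up accepting every step, and whichever step a given skeptic rejects, about 80% of those who evaluated that step accepted it, which is exactly the secure-feeling position described above.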

In particular, if we combine this with a heavy-tailed distribution of performance across fields, where contribution drops off exponentially with ability and a few people matter far more to progress than most, it becomes very difficult to distinguish the cases where a small/insular group arguing for something extreme (relative to the prevailing distribution of views) is correct and everyone else just doesn’t get the arguments/data, from the cases where the small group is being fooled by a selection effect and the conclusion is actually false.

I’ll just quote it in full, since there’s no better way to summarize this/link to it:

Yeah. In science the association with things like scientific output, prizes, things like that, there’s a strong correlation and it seems like an exponential effect. It’s not a binary drop-off. There would be levels at which people cannot learn the relevant fields, they can’t keep the skills in mind faster than they forget them. It’s not a divide where there’s Einstein and the group that is 10 times as populous as that just can’t do it. Or the group that’s 100 times as populous as that suddenly can’t do it. The ability to do the things earlier with less evidence and such falls off at a faster rate in Mathematics and theoretical Physics and such than in most fields.

Yes, people would have discovered general relativity just from the overwhelming data and other people would have done it after Einstein.

No, that intuition is not necessarily correct. Machine learning certainly is an area that rewards ability but it’s also a field where empirics and engineering have been enormously influential. If you’re drawing the correlations compared to theoretical physics and pure mathematics, I think you’ll find a lower correlation with cognitive ability.

There are obvious implications for our beliefs about AI risk/AI power in general, but this is applicable to a lot of fields, and it probably explains at least some of the skepticism many people have towards groups that make weird/surprising/extreme claims (relative to their world model).