Do Scientists Already Know This Stuff?

poke alleges:

“Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn’t mean it’s not included in the actual social institution of science that produces actual real science here in the real world; it’s your description and not science that is faulty.”

I know I’ve been calling my younger self “stupid” but that is a figure of speech; “unskillfully wielding high intelligence” would be more precise. Eliezer18 was not in the habit of making obvious mistakes—it’s just that his “obvious” wasn’t my “obvious”.

No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself.

Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences. Just like Eliezer18.

“Consciousness is caused by quantum gravity” has testable implications: It implies that you should be able to look at neurons and discover that a coherent quantum superposition (whose collapse?) contributes to information-processing, and that you won’t ever be able to reproduce a neuron’s input-output behavior using a computable microanatomical simulation...

...but even after you say “Consciousness is caused by quantum gravity”, you don’t anticipate anything about how your brain thinks “I think therefore I am!” or the mysterious redness of red, that you did not anticipate before, even though you feel like you know a cause of it. This is a tremendous danger sign, I now realize, but it’s not the danger sign that I was warned against, and I doubt that Penrose was ever told of it by his thesis advisor. For that matter, I doubt that Niels Bohr was ever warned against it when it came time to formulate the Copenhagen Interpretation.

As far as I can tell, the reason Eliezer18 and Sir Roger Penrose and Niels Bohr were not warned is that no standard warning exists.

I did not generalize the concept of “mysterious answers to mysterious questions”, in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical, and semitechnical scientific explanations. Now, the final output of that analysis can be phrased nontechnically in terms of four danger signs:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.

  • Second, the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.

  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.

  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

In principle, all this could have been said in the immediate aftermath of vitalism, just as elementary probability theory could have been invented by Archimedes, or the ancient Greeks could have theorized natural selection. But in fact no one ever warned me against any of these four dangers, in those terms—the closest being the warning that hypotheses should have testable consequences. And I didn’t conceptualize the warning signs explicitly until I was trying to think of the whole affair in terms of probability distributions—some degree of overkill was required.

I simply have no reason to believe that these warnings are passed down in scientific apprenticeships—certainly not to a majority of scientists. Among other things, it is advice for handling situations of confusion and despair, scientific chaos. When would the average scientist or average mentor have an opportunity to use that kind of technique?

We just got through discussing the single-world fiasco in physics. Clearly, no one told them about the formal definition of Occam’s Razor, in whispered apprenticeship or otherwise.

There is a known effect where great scientists have multiple great students. This may well be due to the mentors passing on skills that they can’t describe. But I don’t think that counts as part of standard science. And if the great mentors haven’t been able to put their guidance into words and publish it generally, that’s not a good sign for how well these things are understood.

Reasoning in the absence of definite evidence without going instantaneously completely wrong is really really hard. When you’re learning in school, you can miss one point, and then be taught fifty other points that happen to be correct. When you’re reasoning out new knowledge in the absence of crushingly overwhelming guidance, you can miss one point and wake up in Outer Mongolia fifty steps later.

I am pretty sure that scientists who switch off their brains and relax with some comfortable nonsense as soon as they leave their own specialties, do not realize that minds are engines and that there is a causal story behind every trustworthy belief. Nor, I suspect, were they ever told that there is an exact rational probability given a state of evidence, which has no room for whims; even if you can’t calculate the answer, and even if you don’t hear any authoritative command for what to believe.
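To make “an exact rational probability given a state of evidence” concrete, here is a minimal sketch with hypothetical numbers of my own invention (nothing in this essay specifies them): once the prior and the likelihoods are stated, Bayes’ theorem fixes the posterior, whether or not anyone commands you to believe it.

```python
# Illustrative only: the prior and likelihoods below are made-up example values.
# Given a state of evidence, the posterior is exactly determined - no room for whims.

prior_h = 0.01            # P(H): prior probability of the hypothesis
p_e_given_h = 0.80        # P(E|H): probability of seeing the evidence if H is true
p_e_given_not_h = 0.05    # P(E|~H): probability of seeing the evidence if H is false

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)   # P(E), total probability
posterior_h = p_e_given_h * prior_h / p_e                       # P(H|E), Bayes' theorem

print(f"P(H|E) = {posterior_h:.3f}")   # ~0.139, whether you like that number or not
```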

I doubt that scientists who are asked to pontificate on the future by the media, who sketch amazingly detailed pictures of Life in 2050, were ever taught about the conjunction fallacy. Or about how the representativeness heuristic can make more detailed stories seem more plausible, even as each extra detail drags down the probability. The notion that every added detail needs its own support—that you can’t make up big detailed stories that sound just like the detailed stories you were taught in science or history class—is absolutely vital to precise thinking in the absence of definite evidence. But how would a notion like that get into the standard scientific apprenticeship? The cognitive bias was uncovered only a few decades ago, and not popularized until very recently. A toy calculation follows below.
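As a sketch of that arithmetic, with conditional probabilities I have invented purely for illustration: each added detail may sound individually plausible, yet the conjunction can never be more probable than any single detail, and multiplying the details together drags the whole story down.

```python
# Illustrative only: every probability below is hypothetical.
# A detailed story is a conjunction, and each "plausible" extra detail shrinks it.

details = [
    ("fusion power is commercialized by 2050",        0.4),
    ("...and the effort is led by a private startup", 0.5),
    ("...and the first plant is built domestically",  0.6),
    ("...and it undercuts coal on price",             0.5),
]

p_story = 1.0
for detail, p_given_previous in details:
    p_story *= p_given_previous
    print(f"{detail}: running probability = {p_story:.2f}")

# The final story sounds vivid and specific, but P = 0.4 * 0.5 * 0.6 * 0.5 = 0.06.
```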

Then there are affective death spirals around notions like “emergence” or “complexity”, which are sufficiently vaguely defined that you can say lots of nice things about them. There are whole academic subfields built around the kind of mistakes that Eliezer18 used to make! (Though I never fell for the “emergence” thing.)

I sometimes say that the goal of science is to amass such an enormous mountain of evidence that not even scientists can ignore it, and that this is the distinguishing feature of a scientist: a non-scientist will ignore it anyway.

If there can exist some amount of evidence so crushing that you finally despair, stop making excuses and just give up—drop the old theory and never mention it again—then this is all it takes to let the ratchet of Science turn forward over time, and raise up a technological civilization. Contrast to religion.

Books by Carl Sagan and Martin Gardner and the other veins of Traditional Rationality are meant to accomplish this difference: to transform someone from a non-scientist into a potential scientist, and guard them from experimentally disproven madness.

What further training does a professional scientist get? Some frequentist stats classes on how to calculate statistical significance. Training in standard techniques that will let them churn out papers within a solidly established paradigm.

If Science demanded more than this from the average scientist, I don’t think it would be possible for Science to get done. We have problems enough from people who sneak in without the drop-dead-basic qualifications.

Nick Tarleton summarized the resulting problem very well—better than I did, in fact: If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn’t call you a bad person. Science doesn’t trust its elders to decide which hypotheses “aren’t worth testing”. But this is a deliberately lax social standard, and if you try to translate it into a standard of individual epistemic rationality, it lets you believe far too much. Dropping back into the analogy with pragmatic-distrust-based libertarianism, it’s the difference between “Cigarettes shouldn’t be illegal” and “Go smoke a Marlboro”.

Do you remember ever being warned against that mistake, in so many words? Then why wouldn’t people make exactly that error? How many people will spontaneously go an extra mile and be even stricter with themselves? Some, but not many.

Many scientists will believe all manner of ridiculous things outside the laboratory, so long as they can convince themselves the nonsense hasn’t been definitively disproven, or so long as they manage not to ask. Is there some standard lecture that grad students receive, such that anyone who sees this folly can ask, “Were they absent from class that day?” No, as far as I can tell.

Maybe if you’re super lucky and get a famous mentor, they’ll tell you rare personal secrets like “Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial” or “Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it’s not standard.”

But I really don’t think there’s a huge secret standard scientific tradition of precision-grade rational reasoning on sparse evidence. Half of all the scientists out there still believe they believe in God! The more difficult skills are not standard!