When Science Can’t Help

Once upon a time, a younger Eliezer had a stupid theory. Let’s say that Eliezer₁₈’s stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity. This isn’t the whole story, not even close, but it will do for a start.

And there came a point where I looked back, and realized:

  1. I had carefully followed everything I’d been told was Traditionally Rational, in the course of going astray. For example, I’d been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.

  2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, “Oh, well, I guess my theory was wrong.”

From Science’s perspective, that is how things are supposed to work—happy fun for everyone. You admitted your error! Good for you! Isn’t that what Science is all about?

But what if I didn’t want to waste ten years?

Well… Science didn’t have much to say about that. How could Science say which theory was right, in advance of the experimental test? Science doesn’t care where your theory comes from—it just says, “Go test it.”

This is the great strength of Science, and also its great weakness.

Gray Area asked:

  “Eliezer, why are you concerned with untestable questions?”

Because questions that can be easily and immediately tested are hard for Science to get wrong.

I mean, sure, when there’s already definite unmistakable experimental evidence available, go with it. Why on Earth wouldn’t you?

But sometimes a question will have very large, very definite experimental consequences in your future—but you can’t easily test it experimentally right now—and yet there is a strong rational argument.

Macroscopic quantum superpositions are readily testable: It would just take nanotechnological precision, very low temperatures, and a nice clear area of interstellar space. Oh, sure, you can’t do it right now, because it’s too expensive or impossible for today’s technology or something like that—but in theory, sure! Why, maybe someday they’ll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void. (Asking what quantum non-realism says about the status of any observers inside these computers helps to reveal the underspecification of quantum non-realism.)

This doesn’t seem immediately pragmatically relevant to your life, I’m guessing, but it establishes the pattern: Not everything with future consequences is cheap to test now.

Evolutionary psychology is another example of a case where rationality has to take over from science. While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally. But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally. Science would render a verdict of “not proven” on individual parts of a connected theoretical mesh that is experimentally productive as a whole. We’d need a new kind of verdict for that, something like “indirectly supported”.

Or what about cryonics?

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn’t offer definite unmistakable experimental evidence that we can get right now.

So do you say, “I don’t believe in cryonics because it hasn’t been experimentally proven, and you shouldn’t believe in things that haven’t been experimentally proven”?

Well, from a Bayesian perspective, that’s incorrect. Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear. If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil was actually curing cancer, some scientist would be performing a controlled study to verify it—that, at the least, doctors would be reporting case studies of amazing recoveries—and so the absence of this evidence is strong evidence of absence. But “gaps in the fossil record” are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.
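
To put rough numbers on that (a minimal sketch; the probabilities below are illustrative assumptions, not anything from the original argument), Bayes’ rule in odds form shows why the same “absence of evidence” can be strong in one case and weak in the other:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Observing "no evidence E" favors not-H only insofar as H predicted E.

def update_on_absence(prior_odds, p_absent_given_h, p_absent_given_not_h):
    """Odds for hypothesis H after observing that the expected evidence is absent."""
    return prior_odds * (p_absent_given_h / p_absent_given_not_h)

# Snake oil: if it really cured cancer, studies and case reports would almost
# certainly exist by now, so their absence is strong evidence of absence.
print(update_on_absence(1.0, p_absent_given_h=0.05, p_absent_given_not_h=0.99))  # ~0.05

# Fossil gap: even if the intermediate species existed, fossilization and
# discovery are rare, so the missing fossil barely moves the odds.
print(update_on_absence(1.0, p_absent_given_h=0.90, p_absent_given_not_h=0.95))  # ~0.95
```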

Reviving a cryonically frozen mammal is just not something you’d expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival. That’s how I see Bayes seeing it.

Oh, and as for the actual arguments for cryonics—I’m not going to go into those at the moment. But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible that whatever preserves the pattern of synapses preserves as much of “you” as is preserved from one night’s sleep to morning’s waking.

Now, to be fair, someone who says, “I don’t believe in cryonics because it hasn’t been proven experimentally” is misapplying the rules of Science; this is not a case where Science actually gives the wrong answer. In the absence of a definite experimental test, the verdict of Science here is “Not proven”. Anyone who interprets that as a rejection is taking an extra step outside of Science, not a misstep within Science.

John McCarthy’s Wikiquotes page has him saying, “Your statements amount to saying that if AI is possible, it should be easy. Why is that?” The Wikiquotes page doesn’t say what McCarthy was responding to, but I could venture a guess.

The general mistake probably arises because there are cases where the absence of scientific proof is strong evidence—because an experiment would be readily performable, and so failure to perform it is itself suspicious. (Though not as suspicious as I used to think—with all the strangely varied anecdotal evidence coming in from respected sources, why the hell isn’t anyone testing Seth Roberts’s theory of appetite suppression?)

Another confusion factor may be that if you test Pharmaceutical X on 1000 subjects and find that 56% of the control group and 57% of the experimental group recover, some people will call that a verdict of “Not proven”. I would call it an experimental verdict of “Pharmaceutical X doesn’t work well, if at all”. Just because this verdict is theoretically retractable in the face of new evidence doesn’t make it ambiguous.
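
As a rough back-of-the-envelope check (assuming roughly 500 subjects per group, which the hypothetical doesn’t actually specify), the numbers bear that verdict out: the observed difference is far too small to stand out from noise, and even the upper end of the plausible range is modest.

```python
import math

# Assume ~500 subjects per group out of the 1000 total (not stated in the example).
n = 500
p_control, p_treat = 0.56, 0.57
diff = p_treat - p_control

# Standard error of the difference between two sample proportions.
se = math.sqrt(p_control * (1 - p_control) / n + p_treat * (1 - p_treat) / n)
z = diff / se
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.1%}, z = {z:.2f}")              # z ~ 0.3: nowhere near significance
print(f"95% CI on true difference: [{lo:.1%}, {hi:.1%}]")   # roughly [-5%, +7%]: any real benefit is small
```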

In any case, right now you’ve got people dismissing cryonics out of hand as “not scientific”, like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened. “Call me when cryonicists actually revive someone,” they say; which, as Mike Li observes, is like saying “I refuse to get into this ambulance; call me when it’s actually at the hospital”. Maybe Martin Gardner warned them against believing in strange things without experimental evidence. So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

—a calculated bet you could only make rationally.

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it. But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory. And sometimes you just can’t get clear experimental evidence at all.

Either way, you have to try to do the thing that Science doesn’t trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

(Oh, and sometimes a disconfirming experimental result looks like: “Your entire species has just been wiped out! You are now scientifically required to relinquish your theory. If you publicly recant, good for you! Remember, it takes a strong mind to give up strongly held beliefs. Feel free to try another hypothesis next time!”)