When Science Can’t Help

Once upon a time, a younger Eliezer had a stupid theory. Let’s say that Eliezer18’s stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity. This isn’t the whole story, not even close, but it will do for a start.

And there came a point where I looked back, and re­al­ized:

  1. I had carefully followed everything I’d been told was Traditionally Rational, in the course of going astray. For example, I’d been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.

  2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, “Oh, well, I guess my theory was wrong.”

From Science’s perspective, that is how things are supposed to work—happy fun for everyone. You admitted your error! Good for you! Isn’t that what Science is all about?

But what if I didn’t want to waste ten years?

Well… Science didn’t have much to say about that. How could Science say which theory was right, in advance of the experimental test? Science doesn’t care where your theory comes from—it just says, “Go test it.”

This is the great strength of Science, and also its great weakness.

Gray Area asked:

Eliezer, why are you concerned with untestable questions?

Because questions that can be easily and immediately tested are hard for Science to get wrong.

I mean, sure, when there’s already definite unmistakable experimental evidence available, go with it. Why on Earth wouldn’t you?

But sometimes a question will have very large, very definite experimental consequences in your future—but you can’t easily test it experimentally right now—and yet there is a strong rational argument.

Macroscopic quantum superpositions are readily testable: It would just take nanotechnological precision, very low temperatures, and a nice clear area of interstellar space. Oh, sure, you can’t do it right now, because it’s too expensive or impossible for today’s technology or something like that—but in theory, sure! Why, maybe someday they’ll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void. (Asking what quantum non-realism says about the status of any observers inside these computers helps to reveal the underspecification of quantum non-realism.)

This doesn’t seem immediately pragmatically relevant to your life, I’m guessing, but it establishes the pattern: Not everything with future consequences is cheap to test now.

Evolutionary psychology is another example of a case where rationality has to take over from science. While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally. But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally. Science would render a verdict of “not proven” on individual parts of a connected theoretical mesh that is experimentally productive as a whole. We’d need a new kind of verdict for that, something like “indirectly supported”.
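
As a rough sketch of what “indirectly supported” could mean in Bayesian terms, suppose evidence E bears on a helper hypothesis H only through the shared theory T that links them (a simplifying assumption I am adding for illustration, i.e., E and H are conditionally independent given T):

    \[
    P(H \mid E) \;=\; P(H \mid T)\,P(T \mid E) \;+\; P(H \mid \neg T)\,P(\neg T \mid E)
    \]

If the helper hypothesis is more likely given the theory than without it, and the evidence raises the probability of the theory, then this weighted average shifts toward P(H | T), so P(H | E) ends up above P(H): the untested strand of the mesh gains probability even though no experiment touched it directly.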

Or what about cryonics?

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn’t offer definite unmistakable experimental evidence that we can get right now.

So do you say, “I don’t believe in cryonics because it hasn’t been experimentally proven, and you shouldn’t believe in things that haven’t been experimentally proven?”

Well, from a Bayesian perspective, that’s incorrect. Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear. If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil were actually curing cancer, some scientist would be performing a controlled study to verify it—that, at the least, doctors would be reporting case studies of amazing recoveries—and so the absence of this evidence is strong evidence of absence. But “gaps in the fossil record” are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.
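
To make the “only to the degree” explicit, here is the odds form of Bayes’s theorem applied to failing to observe the expected evidence E for a hypothesis H:

    \[
    \frac{P(H \mid \neg E)}{P(\neg H \mid \neg E)} \;=\;
    \frac{P(\neg E \mid H)}{P(\neg E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
    \]

When E would almost certainly have appeared if H were true (the snake-oil case), P(¬E | H) is small and the odds on H collapse; when E was unlikely to appear either way (the fossil case, or cryonic revival with modern technology), the likelihood ratio is close to 1 and the absence of evidence barely moves the odds.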

Reviving a cryonically frozen mammal is just not something you’d expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival. That’s how I see Bayes seeing it.

Oh, and as for the actual arguments for cryonics—I’m not going to go into those at the moment. But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible that whatever preserves the pattern of synapses preserves as much of “you” as is preserved from one night’s sleep to morning’s waking.

Now, to be fair, someone who says, “I don’t believe in cryonics because it hasn’t been proven experimentally” is misapplying the rules of Science; this is not a case where science actually gives the wrong answer. In the absence of a definite experimental test, the verdict of science here is “Not proven”. Anyone who interprets that as a rejection is taking an extra step outside of science, not a misstep within science.

John McCarthy’s Wikiquotes page has him saying, “Your statements amount to saying that if AI is possible, it should be easy. Why is that?” The Wikiquotes page doesn’t say what McCarthy was responding to, but I could venture a guess.

The general mistake probably arises because there are cases where the absence of scientific proof is strong evidence—because an experiment would be readily performable, and so failure to perform it is itself suspicious. (Though not as suspicious as I used to think—with all the strangely varied anecdotal evidence coming in from respected sources, why the hell isn’t anyone testing Seth Roberts’s theory of appetite suppression?)

Another confusion factor may be that if you test Pharmaceutical X on 1000 subjects and find that 56% of the control group and 57% of the experimental group recover, some people will call that a verdict of “Not proven”. I would call it an experimental verdict of “Pharmaceutical X doesn’t work well, if at all”. Just because this verdict is theoretically retractable in the face of new evidence doesn’t make it ambiguous.
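
As a back-of-the-envelope check (assuming, hypothetically, an even 500/500 split between the two groups, which the example above doesn’t specify), a simple two-proportion z-test shows how little a 56% vs. 57% recovery rate tells you:

    import math

    # Hypothetical split of the 1000 subjects: 500 control, 500 treated.
    n_control, n_treated = 500, 500
    p_control, p_treated = 0.56, 0.57  # observed recovery rates

    # Pooled recovery rate and standard error of the difference in proportions.
    p_pooled = (p_control * n_control + p_treated * n_treated) / (n_control + n_treated)
    std_err = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_control + 1 / n_treated))

    # z-statistic for the observed one-percentage-point difference.
    z = (p_treated - p_control) / std_err
    print(f"z = {z:.2f}")  # about 0.32, far below the ~1.96 needed for p < 0.05

The point is not just that the difference fails to reach significance; the best estimate of the effect is a single percentage point, which is why “doesn’t work well, if at all” is a fair reading of the data rather than a suspension of judgment.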

In any case, right now you’ve got people dismissing cryonics out of hand as “not scientific”, like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened. “Call me when cryonicists actually revive someone,” they say; which, as Mike Li observes, is like saying “I refuse to get into this ambulance; call me when it’s actually at the hospital”. Maybe Martin Gardner warned them against believing in strange things without experimental evidence. So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

—a calculated bet you could only make rationally.

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it. But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory. And sometimes you just can’t get clear experimental evidence at all.

Either way, you have to try to do the thing that Science doesn’t trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

(Oh, and sometimes a disconfirming experimental result looks like: “Your entire species has just been wiped out! You are now scientifically required to relinquish your theory. If you publicly recant, good for you! Remember, it takes a strong mind to give up strongly held beliefs. Feel free to try another hypothesis next time!”)