Science Isn’t Strict Enough

Once upon a time, a younger Eliezer had a stupid theory. Eliezer18 was careful to follow the precepts of Traditional Rationality that he had been taught; he made sure his stupid theory had experimental consequences. Eliezer18 professed, in accordance with the virtues of a scientist he had been taught, that he wished to test his stupid theory.

This was all that was required to be virtuous, according to what Eliezer18 had been taught was virtue in the way of science.

It was not even remotely the order of effort that would have been required to get it right.

The traditional ideals of Science too readily give out gold stars. Negative experimental results are also knowledge, so everyone who plays gets an award. So long as you can think of some kind of experiment that tests your theory, and you do the experiment, and you accept the results, you’ve played by the rules; you’re a good scientist.

You didn’t necessarily get it right, but you’re a nice science-abiding citizen.

(I note at this point that I am speaking of Science, not the social process of science as it actually works in practice, for two reasons. First, I went astray in trying to follow the ideal of Science—it’s not like I was shot down by a journal editor with a grudge, and it’s not like I was trying to imitate the flaws of academia. Second, if I point out a problem with the ideal as it is traditionally preached, real-world scientists are not forced to likewise go astray!)

Science began as a rebellion against grand philosophical schemas and armchair reasoning. So Science doesn’t include a rule as to what kinds of hypotheses you are and aren’t allowed to test; that is left up to the individual scientist. Trying to guess that a priori would require some kind of grand philosophical schema, and reasoning in advance of the evidence. As a social ideal, Science doesn’t judge you as a bad person for coming up with heretical hypotheses; honest experiments, and acceptance of the results, is virtue unto a scientist.

As long as most scientists can manage to accept definite, unmistakable, unambiguous experimental evidence, science can progress. It may happen too slowly—it may take longer than it should—you may have to wait for a generation of elders to die out—but eventually, the ratchet of knowledge clicks forward another notch. Year by year, decade by decade, the wheel turns forward. It’s enough to support a civilization.

So that’s all that Science really asks of you—the ability to accept reality when you’re beat over the head with it. It’s not much, but it’s enough to sustain a scientific culture.

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, “I believe she doesn’t have breast cancer, because the experiment isn’t definite enough.” You cannot say, “I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate.” 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws.
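The arithmetic behind that 7.5% can be checked directly with Bayes’s theorem. A minimal sketch (the numbers are the ones given above; the variable names are merely illustrative):

```python
# Bayes's theorem applied to the mammography problem described above.
p_cancer = 0.01             # prior: 1% of routinely screened women have breast cancer
p_pos_given_cancer = 0.80   # 80% of women with breast cancer test positive
p_pos_given_healthy = 0.10  # 10% of women without breast cancer get false positives

# Total probability of a positive mammography (law of total probability):
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Posterior: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(round(p_cancer_given_pos, 3))  # 0.075
```

The answer falls out of the numbers; there is no step at which a preference for optimism or pessimism could enter the calculation.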

It is written in the Twelve Virtues, of the third virtue, lightness:

If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

In Science, when it comes to deciding which hypotheses to test, the morality of Science gives you personal freedom of what to believe, so long as it isn’t already ruled out by experiment, and so long as you move to test your hypothesis. Science wouldn’t try to give an official verdict on the best hypothesis to test, in advance of the experiment. That’s left up to the conscience of the individual scientist.

Where definite experimental evidence exists, Science tells you to bow your stubborn neck and accept it. Otherwise, Science leaves it up to you. Science gives you room to wander around within the boundaries of the experimental evidence, according to your whims.

And this is not easily reconciled with Bayesianism’s notion of an exactly right probability estimate, one with no flex or room for whims, that exists both before and after the experiment. It doesn’t match well with the ancient and traditional reason for Science—the distrust of grand schemas, the presumption that people aren’t rational enough to get things right without definite and unmistakable experimental evidence. If we were all perfect Bayesians, we wouldn’t need a social process of science.

Nonetheless, around the time I realized my big mistake, I had also been studying Kahneman and Tversky and Jaynes. I was learning a new Way, stricter than Science. A Way that could criticize my folly, in a way that Science never could. A Way that could have told me, what Science would never have said in advance: “You picked the wrong hypothesis to test, dunderhead.”

But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.

In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.

But if you try to use Bayes even qualitatively—if you try to do the thing that Science doesn’t trust you to do, and reason rationally in the absence of overwhelming evidence—it is like math, in that a single error in a hundred steps can carry you anywhere. It demands lightness, evenness, precision, perfectionism.

There’s a good reason why Science doesn’t trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they’ve worked out the right answer based on hints and logic.

But if you would rather not waste ten years trying to prove the wrong theory, you’ll need to essay the vastly more difficult problem: listening to evidence that doesn’t shout in your ear.

(For the benefit of those in the audience who have not been following along this whole time: Even if you can’t look up the priors for a problem in the Handbook of Chemistry and Physics—even if there’s no Authoritative Source telling you what the priors are—that doesn’t mean you get a free, personal choice of making the priors whatever you want. It means you have a new guessing problem which you must carry out to the best of your ability.

If the mind, as a cognitive engine, could generate correct estimates by fiddling with priors according to whims, you could know things without looking at them, or even alter them without touching them. But the mind is not magic. The rational probability estimate has no room for any decision based on whim, even when it seems that you don’t know the priors.

Similarly, if the Bayesian answer is difficult to compute, that doesn’t mean that Bayes is inapplicable; it means you don’t know what the Bayesian answer is. Bayesian probability theory is not a toolbox of statistical methods, it’s the law that governs any tool you use, whether or not you know it, whether or not you can calculate it.

As for using Bayesian methods on huge, highly general hypothesis spaces—like, “Here’s the data from every physics experiment ever; now, what would be a good Theory of Everything?”—if you knew how to do that in practice, you wouldn’t be a statistician, you would be an Artificial General Intelligence programmer. But that doesn’t mean that human beings, in modeling the universe using human intelligence, are violating the laws of physics / Bayesianism by generating correct guesses without evidence.)

Added: Nick Tarleton says:

The problem is encouraging a private, epistemic standard as lax as the social one.

which pinpoints the problem I was trying to indicate much better than I did.