Faster Than Science

I sometimes say that the method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it; and that this is the distinguishing characteristic of a scientist: a non-scientist will ignore it anyway.

Max Planck was even less optimistic:

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

I am much tickled by this notion, because it implies that the power of science to distinguish truth from falsehood ultimately rests on the good taste of grad students.

The gradual increase in acceptance of many-worlds in academic physics suggests that there are physicists who will only accept a new idea given some combination of epistemic justification and a sufficiently large academic pack in whose company they can be comfortable. As more physicists accept, the pack grows larger, and hence more people go over their individual thresholds for conversion—with the epistemic justification remaining essentially the same.

But Science still gets there eventually, and this is sufficient for the ratchet of Science to move forward, and raise up a technological civilization.

Scientists can be moved by groundless prejudices, by undermined intuitions, by raw herd behavior—the panoply of human flaws. Each time a scientist shifts belief for epistemically unjustifiable reasons, it requires more evidence, or new arguments, to cancel out the noise.

The “collapse of the wavefunction” has no experimental justification, but it appeals to the (undermined) intuition of a single world. Then it may take an extra argument—say, that collapse violates Special Relativity—to begin the slow academic disintegration of an idea that should never have been assigned non-negligible probability in the first place.

From a Bayesian perspective, human academic science as a whole is a highly inefficient processor of evidence. Each time an unjustifiable argument shifts belief, you need an extra justifiable argument to shift it back. The social process of science leans on extra evidence to overcome cognitive noise.

A more charitable way of putting it is that scientists will adopt positions that are theoretically insufficiently extreme, compared to the ideal positions that scientists would adopt, if they were Bayesian AIs and could trust themselves to reason clearly.

But don’t be too charitable. The noise we are talking about is not all innocent mistakes. In many fields, debates drag on for decades after they should have been settled. And not because the scientists on both sides refuse to trust themselves and agree they should look for additional evidence. But because one side keeps throwing up more and more ridiculous objections, and demanding more and more evidence, from an entrenched position of academic power, long after it becomes clear from which quarter the winds of evidence are blowing. (I’m thinking here about the debates surrounding the invention of evolutionary psychology, not about many-worlds.)

Is it possible for individual humans or groups to process evidence more efficiently—reach correct conclusions faster—than human academic science as a whole?

“Ideas are tested by experiment. That is the core of science.” And this must be true, because if you can’t trust Zombie Feynman, who can you trust?

Yet where do the ideas come from?

You may be tempted to reply, “They come from scientists. Got any other questions?” In Science you’re not supposed to care where the hypotheses come from—just whether they pass or fail experimentally.

Okay, but if you remove all new ideas, the scientific process as a whole stops working because it has no alternative hypotheses to test. So inventing new ideas is not a dispensable part of the process.

Now put your Bayesian goggles back on. As described in Einstein’s Arrogance, there are queries that are not binary—where the answer is not “Yes” or “No”, but drawn from a larger space of structures, e.g., the space of equations. In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.

If you’re working in the space of all equations that can be specified in 32 bits or less, you’re working in a space of 4 billion equations. It takes far more Bayesian evidence to raise one of those hypotheses to the 10% probability level than it takes to raise that hypothesis from 10% to 90%.
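To put rough numbers on the claim, here is a minimal sketch (the odds-ratio framing and the function name are my own, not from the text) that measures evidence in bits of likelihood ratio, starting from a uniform prior over the 2^32 equations:

```python
import math

def bits_of_evidence(prior_odds, posterior_odds):
    """Bits of Bayesian evidence needed to move from prior odds to posterior odds."""
    return math.log2(posterior_odds / prior_odds)

N = 2 ** 32                    # equations specifiable in 32 bits or less
prior_odds = 1 / (N - 1)       # one equation against all the rest, uniform prior

# Promote the hypothesis to your attention: reach 10% probability (odds 1:9).
to_10 = bits_of_evidence(prior_odds, 0.1 / 0.9)

# Confirm it experimentally: go from 10% (odds 1:9) to 90% (odds 9:1).
to_90 = bits_of_evidence(0.1 / 0.9, 0.9 / 0.1)

print(f"raise to 10%: {to_10:.1f} bits")   # ~28.8 bits
print(f"then to 90%:  {to_90:.1f} bits")   # ~6.3 bits
```

On these assumptions, merely getting the hypothesis onto the table costs roughly 28.8 bits of evidence, while the confirming experiment that everyone calls “Science” costs only about 6.3 more.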

When the idea-space is large, coming up with ideas worthy of testing involves much more work—in the Bayesian-thermodynamic sense of “work”—than merely obtaining an experimental result with p < 0.0001 for the new hypothesis over the old hypothesis.

If this doesn’t seem obvious-at-a-glance, pause here and read Einstein’s Arrogance.

The scientific process has always relied on scientists to come up with hypotheses to test, via some process not further specified by Science. Suppose you came up with some way of generating hypotheses that was completely crazy—say, pumping a robot-controlled Ouija board with the digits of pi—and the resulting suggestions kept on getting verified experimentally. The pure ideal essence of Science wouldn’t skip a beat. The pure ideal essence of Bayes would burst into flames and die.

(Compared to Science, Bayes is falsified by more of the possible outcomes.)

This doesn’t mean that the process of deciding which ideas to test is unimportant to Science. It means that Science doesn’t specify it.

In practice, the robot-controlled Ouija board doesn’t work. In practice, there are scientific queries with answer spaces so large that, picking models at random to test, it would take zillions of years to hit on a model that made good predictions—like getting monkeys to type Shakespeare.

At the frontier of science—the boundary between ignorance and knowledge, where science advances—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science. That’s how they know which hypotheses to test, in advance of the test itself.

If you take your Bayesian goggles off, you can say, “Well, they don’t have to know, they just have to guess.” If you put your Bayesian goggles back on, you realize that “guessing” with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as “guessing” with 80% probability—at least for large answer spaces.

The scientist may not know he has done this epistemic work successfully, in advance of the experiment; but he must, in fact, have done it successfully! Otherwise he will not even think of the correct hypothesis. In large answer spaces, anyway.

So the scientist makes the novel prediction, performs the experiment, publishes the result, and now Science knows it too. It is now part of the publicly accessible knowledge of humankind, that anyone can verify for themselves.

In between was an interval where the scientist rationally knew something that the public social process of science hadn’t yet confirmed. And this is not a trivial interval, though it may be short; for it is where the frontier of science lies, the advancing border.

All of this is more true of non-routine science than routine science, because the notion of a large answer space applies where the answer is not “Yes” or “No” or drawn from a small set of obvious alternatives. It is much easier to train people to test ideas than to have good ideas to test.