Science Doesn’t Trust Your Rationality

Scott Aaronson suggests that Many-Worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:

Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).

Now there’s an analogy that would never have occurred to me.

I’ve previously argued that Science rejects Many-Worlds but Bayes accepts it. (Here, “Science” is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)

It furthermore seems to me that there is a deep analogy between (small-‘l’) libertarianism and Science:

  1. Both are based on a pragmatic distrust of reasonable-sounding arguments.

  2. Both try to build systems that are more trustworthy than the people in them.

  3. Both accept that people are flawed, and try to harness their flaws to power the system.

The core argument for libertarianism is historically motivated distrust of lovely theories of “How much better society would be, if we just made a rule that said XYZ.” If that sort of trick actually worked, then more regulations would correlate with higher economic growth as society moved from local to global optima. But when some person or interest group gets enough power to start doing everything they think is a good idea, history says that what actually happens is Revolutionary France or Soviet Russia.

The plans that in lovely theory should have made everyone happy ever after, don’t have the results predicted by reasonable-sounding arguments. And power corrupts, and attracts the corrupt.

So you regulate as little as possible, because you can’t trust the lovely theories and you can’t trust the people who implement them.

You don’t shake your finger at people for being selfish. You try to build an efficient system of production out of selfish participants, by requiring transactions to be voluntary. So people are forced to play positive-sum games, because that’s how they get the other party to sign the contract. With violence restrained and contracts enforced, individual selfishness can power a globally productive system.

Of course none of this works quite so well in practice as in theory, and I’m not going to go into market failures, commons problems, etc. The core argument for libertarianism is not that libertarianism would work in a perfect world, but that it degrades gracefully into real life. Or rather, degrades less awkwardly than any other known economic principle. (People who see Libertarianism as the perfect solution for perfect people strike me as kinda missing the point of the “pragmatic distrust” thing.)

Science first came to know itself as a rebellion against trusting the word of Aristotle. If the people of that revolution had merely said, “Let us trust ourselves, not Aristotle!” they would have flashed and faded like the French Revolution.

But the Scientific Revolution lasted because—like the American Revolution—the architects propounded a stranger philosophy: “Let us trust no one! Not even ourselves!”

In the beginning came the idea that we can’t just toss out Aristotle’s armchair reasoning and replace it with different armchair reasoning. We need to talk to Nature, and actually listen to what It says in reply. This, itself, was a stroke of genius.

But then came the challenge of implementation. People are stubborn, and may not want to accept the verdict of experiment. Shall we shake a disapproving finger at them, and say “Naughty”?

No; we assume and accept that each individual scientist may be crazily attached to their personal theories. Nor do we assume that anyone can be trained out of this tendency—we don’t try to choose Eminent Judges who are supposed to be impartial.

Instead, we try to harness the individual scientist’s stubborn desire to prove their personal theory, by saying: “Make a new experimental prediction, and do the experiment. If you’re right, and the experiment is replicated, you win.” So long as scientists believe this is true, they have a motive to do experiments that can falsify their own theories. Only by accepting the possibility of defeat is it possible to win. And any great claim will require replication; this gives scientists a motive to be honest, on pain of great embarrassment.

And so the stubbornness of individual scientists is harnessed to produce a steady stream of knowledge at the group level. The System is somewhat more trustworthy than its parts.

Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won’t ever visit again. An economy of genuinely selfish human-level agents would implode. Similarly, Science relies on most scientists not committing sins so egregious that they can’t rationalize them away.

To the extent that scientists believe they can promote their theories by playing academic politics—or game the statistical methods to potentially win without a chance of losing—or to the extent that nobody bothers to replicate claims—science degrades in effectiveness. But it degrades gracefully, as such things go.

The part where the successful predictions belong to the theory and theorists who originally made them, and cannot just be stolen by a theory that comes along later—without a novel experimental prediction of its own—is an important feature of this social process.

The final upshot is that Science is not easily reconciled with probability theory. If you do a probability-theoretic calculation correctly, you’re going to get the rational answer. Science doesn’t trust your rationality, and it doesn’t rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Regarding Science as a mere approximation to some probability-theoretic ideal of rationality… would certainly seem to be rational. There seems to be an extremely reasonable-sounding argument that Bayes’s Theorem is the hidden structure that explains why Science works. But to subordinate Science to the grand schema of Bayesianism, and let Bayesianism come in and override Science’s verdict when that seems appropriate, is not a trivial step!
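The "probability-theoretic calculation" being contrasted with Science can be made concrete. As a minimal illustrative sketch (the prior and likelihoods here are invented numbers, not anything claimed in the essay), a single Bayes's-Theorem update looks like this:

```python
# Illustrative sketch: one application of Bayes's Theorem, the "hidden
# structure" that arguably explains why Science works.
# All numbers are invented for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# A hypothesis held at 20% credence, meeting evidence four times likelier
# if the hypothesis is true than if it is false, rises to 50% credence.
print(posterior(0.2, 0.8, 0.2))  # 0.5
```

Doing such updates correctly is what "the rational answer" above refers to; the essay's point is that Science declines to trust individuals to carry out even this arithmetic honestly.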

Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science… right?

So, are you going to believe in faster-than-light quantum “collapse” fairies after all? Or do you think you’re smarter than that?