Should We Ban Physics?

Nobel laureate Marie Curie died of aplastic anemia, the victim of radiation from the many fascinating glowing substances she had learned to isolate.

How could she have known? And the answer, as far as I can tell, is that she couldn’t. The only way she could have avoided death was by being too scared of anything new to go near it. Would banning physics experiments have saved Curie from herself?

But far more than one cancer patient has since been saved by radiation therapy. And the real cost of banning physics is not just losing that one experiment—it’s losing physics. No more Industrial Revolution.

Some of us fall, and the human species carries on, and advances; our modern world is built on the backs, and sometimes the bodies, of people who took risks. My father is fond of saying that if the automobile were invented nowadays, the saddle industry would arrange to have it outlawed.

But what if the laws of physics had been different from what they are? What if Curie, by isolating and purifying the glowy stuff, had caused something akin to a fission chain reaction gone critical… which, the laws of physics being different, had ignited the atmosphere or produced a strangelet?

At the recent Global Catastrophic Risks conference, someone proposed a policy prescription which, I argued, amounted to a ban on all physics experiments involving the production of novel physical situations—as opposed to measuring existing phenomena. You can weigh a rock, but you can’t purify radium, and you can’t even expose the rock to X-rays unless you can show that exactly similar X-rays hit rocks all the time. So the Large Hadron Collider, which produces collisions as energetic as cosmic rays, but not exactly the same as cosmic rays, would be off the menu.

After all, whenever you do something new, even if you calculate that everything is safe, there is surely some probability of being mistaken in the calculation—right?

The one who proposed the policy disagreed that it cashed out to a blanket ban on physics experiments. That discussion is still in progress, so I won’t talk further about their policy argument.

But if you consider the policy of “Ban Physics”, and leave aside the total political infeasibility, I think the strongest way to frame the issue—from the pro-ban viewpoint—would be as follows:

Suppose that Tegmark’s Level IV Multiverse is real—that all possible mathematical objects, including all possible physical universes with all possible laws of physics, exist. (Perhaps anthropically weighted by their simplicity.)

Somewhere in Tegmark’s Level IV Multiverse, then, there have undoubtedly been cases where intelligence arises in a universe with physics unlike this one—say, life arises not on a planet but on a gigantic triangular plate hanging suspended in the void—and that intelligence accidentally destroys its world, perhaps its universe, in the course of a physics experiment.

Maybe they experiment with alchemy, bring together some combination of substances that were never brought together before, and catalyze a change in their atmosphere. Or maybe they manage to break their triangular plate, whose pieces fall and break other triangular plates.

So, across the whole of the Tegmark Level IV multiverse—containing all possible physical universes with all laws of physics, weighted by the laws’ simplicity:

What fraction of sentient species that try to follow the policy “Ban all physics experiments involving situations with a remote possibility of being novel, until you can augment your own intelligence enough to do error-free cognition”;

And what fraction of sentient species that go ahead and do physics experiments;

Survive in the long term, on average?

In the case of the human species, trying to ban chemistry would hardly have been effective—but supposing that a species actually could make a collective decision like that, it’s at least not clear-cut which fraction would be larger across the whole multiverse. (We, in our universe, have already learned that you can’t easily destroy the world with alchemy.)

Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?