Do Scientists Already Know This Stuff?

poke alleges:

“Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn’t mean it’s not included in the actual social institution of science that produces actual real science here in the real world; it’s your description and not science that is faulty.”

I know I’ve been calling my younger self “stupid” but that is a figure of speech; “unskillfully wielding high intelligence” would be more precise. Eliezer18 was not in the habit of making obvious mistakes—it’s just that his “obvious” wasn’t my “obvious”.

No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself.

Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences. Just like Eliezer18.

“Consciousness is caused by quantum gravity” has testable implications: It implies that you should be able to look at neurons and discover that a coherent quantum superposition (whose collapse?) contributes to information-processing, and that you won’t ever be able to reproduce a neuron’s input-output behavior using a computable microanatomical simulation...

...but even after you say “Consciousness is caused by quantum gravity”, you don’t anticipate anything about how your brain thinks “I think therefore I am!” or the mysterious redness of red, that you did not anticipate before, even though you feel like you know a cause of it. This is a tremendous danger sign, I now realize, but it’s not the danger sign that I was warned against, and I doubt that Penrose was ever told of it by his thesis advisor. For that matter, I doubt that Niels Bohr was ever warned against it when it came time to formulate the Copenhagen Interpretation.

As far as I can tell, the reason Eliezer18 and Sir Roger Penrose and Niels Bohr were not warned is that no standard warning exists.

I did not generalize the concept of “mysterious answers to mysterious questions”, in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical, and semitechnical scientific explanations. Now the final output of that analysis can be phrased nontechnically in terms of four danger signs:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.

  • Second, the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.

  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.

  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

In principle, all this could have been said in the immediate aftermath of vitalism. Just like elementary probability theory could have been invented by Archimedes, or the ancient Greeks could have theorized natural selection. But in fact no one ever warned me against any of these four dangers, in those terms—the closest being the warning that hypotheses should have testable consequences. And I didn’t conceptualize the warning signs explicitly until I was trying to think of the whole affair in terms of probability distributions—some degree of overkill was required.
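To give a minimal sketch of the Bayesian version of the first danger sign (standard notation, not the full analysis): an explanation H controls anticipation only insofar as it assigns different likelihoods to the different observations you might make. In odds form,

\[
\frac{P(H \mid e)}{P(\lnot H \mid e)} \;=\; \frac{P(e \mid H)}{P(e \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)},
\]

so if \(P(e \mid H) = P(e \mid \lnot H)\) for every possible observation \(e\), the likelihood ratio is always 1 and no experience can ever shift your belief in H. “Quantum gravity did it”, applied to the redness of red, has exactly this property: it leaves every anticipated experience where it was.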

I simply have no reason to believe that these warnings are passed down in scientific apprenticeships—certainly not to a majority of scientists. Among other things, it is advice for handling situations of confusion and despair, scientific chaos. When would the average scientist or average mentor have an opportunity to use that kind of technique?

We just got through discussing the single-world fiasco in physics. Clearly, no one told them about the formal definition of Occam’s Razor, in whispered apprenticeship or otherwise.
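For reference, a sketch of that formal definition, in the Solomonoff/minimum-description-length form this sequence has been pointing at (the base-2 normalization is conventional):

\[
P(h) \;\propto\; 2^{-\ell(h)},
\]

where \(\ell(h)\) is the length in bits of the shortest program that computes h’s predictions. Every bit added to a theory’s code, such as an extra collapse postulate bolted onto the quantum equations, halves its prior probability, and must be bought back with bits of better-predicted evidence.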

There is a known effect where great scientists have multiple great students. This may well be due to the mentors passing on skills that they can’t describe. But I don’t think that counts as part of standard science. And if the great mentors haven’t been able to put their guidance into words and publish it generally, that’s not a good sign for how well these things are understood.

Reasoning in the absence of definite evidence without going instantaneously completely wrong is really, really hard. When you’re learning in school, you can miss one point, and then be taught fifty other points that happen to be correct. When you’re reasoning out new knowledge in the absence of crushingly overwhelming guidance, you can miss one point and wake up in Outer Mongolia fifty steps later.

I am pretty sure that scientists who switch off their brains and relax with some comfortable nonsense as soon as they leave their own specialties do not realize that minds are engines and that there is a causal story behind every trustworthy belief. Nor, I suspect, were they ever told that there is an exact rational probability given a state of evidence, which has no room for whims; even if you can’t calculate the answer, and even if you don’t hear any authoritative command for what to believe.
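That exact probability is the output of Bayes’ theorem, stated here as the standard identity (nothing beyond the laws of probability is assumed):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}.
\]

Given a prior and a state of evidence E, this pins down a unique number; whether or not anyone can compute it, there is nothing left over for whim to decide.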

I doubt that scientists who are asked to pontificate on the future by the media, who sketch amazingly detailed pictures of Life in 2050, were ever taught about the conjunction fallacy. Or how the representativeness heuristic can make more detailed stories seem more plausible, even as each extra detail drags down the probability. The notion of every added detail needing its own support—of not being able to make up big detailed stories that sound just like the detailed stories you were taught in science or history class—is absolutely vital to precise thinking in the absence of definite evidence. But how would a notion like that get into the standard scientific apprenticeship? The cognitive bias was uncovered only a few decades ago, and not popularized until very recently.
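The arithmetic behind that drag is the conjunction rule, which is a theorem, not a matter of taste:

\[
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A).
\]

A scenario of Life in 2050 built out of five independent details, each 80% probable on its own, has probability \(0.8^5 \approx 0.33\), even though every detail taken separately sounds likely.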

Then there are affective death spirals around notions like “emergence” or “complexity”, which are sufficiently vaguely defined that you can say lots of nice things about them. There are whole academic subfields built around the kind of mistakes that Eliezer18 used to make! (Though I never fell for the “emergence” thing.)

I sometimes say that the goal of science is to amass such an enormous mountain of evidence that not even scientists can ignore it: and that this is the distinguishing feature of a scientist; a non-scientist will ignore it anyway.

If there can exist some amount of evidence so crushing that you finally despair, stop making excuses and just give up—drop the old theory and never mention it again—then this is all it takes to let the ratchet of Science turn forward over time, and raise up a technological civilization. Contrast to religion.

Books by Carl Sagan and Martin Gardner and the other veins of Traditional Rationality are meant to accomplish this difference: to transform someone from a non-scientist into a potential scientist, and guard them from experimentally disproven madness.

What further training does a professional scientist get? Some frequentist stats classes on how to calculate statistical significance. Training in standard techniques that will let them churn out papers within a solidly established paradigm.

If Science demanded more than this from the average scientist, I don’t think it would be possible for Science to get done. We have problems enough from people who sneak in without the drop-dead-basic qualifications.

Nick Tarleton summarized the resulting problem very well—better than I did, in fact: If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn’t call you a bad person. Science doesn’t trust its elders to decide which hypotheses “aren’t worth testing”. But this is a carefully lax social standard, and if you try to translate it into a standard of individual epistemic rationality, it lets you believe far too much. Dropping back into the analogy with pragmatic-distrust-based libertarianism, it’s the difference between “Cigarettes shouldn’t be illegal” and “Go smoke a Marlboro”.

Do you remember ever being warned against that mistake, in so many words? Then why wouldn’t people make exactly that error? How many people will spontaneously go an extra mile and be even stricter with themselves? Some, but not many.

Many scientists will believe all manner of ridiculous things outside the laboratory, so long as they can convince themselves it hasn’t been definitely disproven, or so long as they manage not to ask. Is there some standard lecture that grad students get, such that people who see this folly would ask, “Were they absent from class that day?” No, as far as I can tell.

Maybe if you’re super lucky and get a famous mentor, they’ll tell you rare personal secrets like “Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial” or “Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it’s not standard.”

But I really don’t think there’s a huge secret standard scientific tradition of precision-grade rational reasoning on sparse evidence. Half of all the scientists out there still believe they believe in God! The more difficult skills are not standard!