High-precision claims may be refuted without being replaced with other high-precision claims

Link post

There’s a common criticism of theory-criticism which goes along the lines of:

Well, sure, this theory isn’t exactly right. But it’s the best theory we have right now. Do you have a better theory? If not, you can’t really claim to have refuted the theory, can you?

This is wrong. This is falsification-resisting theory-apologism. Karl Popper would be livid.

The relevant reason why it’s wrong is that theories make high-precision claims. For example, the standard theory of arithmetic says 561+413=974. Not 975 or 973 or 974.0000001, but exactly 974. If arithmetic didn’t have this guarantee, math would look very different from how it currently looks (it would be necessary to account for possible small jumps in arithmetic operations).
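The exactness is the whole point, and it is easy to check directly. A minimal sketch, using ordinary Python integer arithmetic (which is exact):

```python
# Arithmetic makes an exact, high-precision claim: the sum is
# exactly 974, not "approximately 974".
assert 561 + 413 == 974

# Downstream reasoning leans on that exactness: results can be compared
# for strict equality, and operations can be inverted with no error term.
assert (561 + 413) - 413 == 561
print(561 + 413)  # -> 974
```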

A single bit flip in the state of a computer process can crash the whole program. Similarly, high-precision theories rely on precise invariants, and even small violations of these invariants sink the theory’s claims.
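To make the bit-flip point concrete, here is a small illustration (a hypothetical helper, not from the original post) of how inverting a single bit in a stored value moves it far from the truth:

```python
def flip_bit(value: int, bit: int) -> int:
    """Return `value` with the bit at position `bit` inverted."""
    return value ^ (1 << bit)

correct = 561 + 413               # 974, the exact sum
corrupted = flip_bit(correct, 9)  # a one-bit error in the stored result
print(correct, corrupted)         # -> 974 462
```

One flipped bit near the top of the value changes the answer by hundreds; nothing downstream that relied on the exact sum survives the corruption.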

To a first approximation, a computer either (a) almost always works (>99.99% probability of getting the right answer) or (b) doesn’t work (<0.01% probability of getting the right answer). There are edge cases, such as randomly crashing computers or computers with small floating point errors. However, even a computer that crashes every few minutes functions precisely correctly during >99% of the seconds it runs.

If a computer makes random small errors 0.01% of the time in e.g. arithmetic operations, it’s not an almost-working computer; it’s a completely non-functioning computer that will crash almost immediately.
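A quick simulation (illustrative only — the error model, rate, and seed are made up for the sketch) shows why a 0.01% error rate is fatal rather than minor: across a million additions, dozens of answers come out wrong, so any long chain of dependent computations derails almost immediately.

```python
import random

random.seed(0)       # fixed seed so the sketch is reproducible
ERROR_RATE = 0.0001  # "small" random errors 0.01% of the time

def flaky_add(a: int, b: int) -> int:
    """Adds correctly 99.99% of the time; otherwise off by one."""
    if random.random() < ERROR_RATE:
        return a + b + random.choice([-1, 1])
    return a + b

# In a million additions we expect roughly 100 wrong answers -- a program
# chaining these results together cannot trust anything it computes.
wrong = sum(1 for _ in range(1_000_000) if flaky_add(2, 2) != 4)
print(wrong)
```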

The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and very much risks making this algorithm/circuit useless. (The rest of the program would not be able to rely on guarantees, and would instead need to know the domain in which the algorithm/circuit functions; this would significantly complicate the reasoning about correctness.)

Importantly, such a refutation does not need to come along with an alternative theory of what the algorithm/circuit does. To refute the claim that it adds numbers, it’s sufficient to show a single counterexample without suggesting an alternative. Quality assurance processes are primarily about identifying errors, not about specifying the behavior of non-functioning products.
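In code terms, refutation-without-replacement is just counterexample search. A sketch with a hypothetical `broken_add` that is wrong on exactly one input pair:

```python
def broken_add(a: int, b: int) -> int:
    """Claims to add two 8-bit numbers, but one table entry is wrong."""
    if (a, b) == (200, 55):  # the single buggy case
        return 256
    return a + b

# One counterexample refutes the claim "this adds numbers". The search
# owes no one an alternative theory of what the circuit computes.
counterexample = next(
    ((a, b) for a in range(256) for b in range(256)
     if broken_add(a, b) != a + b),
    None,
)
print(counterexample)  # -> (200, 55)
```

Note that the search result is purely negative knowledge: it establishes that the adder claim is false without characterizing what `broken_add` actually does everywhere else.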

A Bayesian may argue that the refuter must have an alternative belief about the circuit. While this is true assuming the refuter is Bayesian, such a belief need not be high-precision; it may be a high-entropy distribution. And if the refuter is a human, they are not a Bayesian (that would take too much compute), and will instead have a vague representation of the circuit as “something doing some unspecified thing”, with some vague intuitions about what sorts of things are more likely than others. In any case, the Bayesian criticism certainly doesn’t require the refuter to replace the claim about the circuit with an alternative high-precision claim; either a low-precision belief or a lack-of-belief will do.

The case of computer algorithms is particularly clear, but of course this applies elsewhere:

  • If there’s a single exception to conservation of energy, then a high percentage of modern physics theories completely break. The single exception may be sufficient to, for example, create perpetual motion machines. Physics, then, makes a very high-precision claim that energy is conserved, and a refuter of this claim need not supply an alternative physics.

  • If a text is claimed to be the word of God and totally literally true, then a single example of a definitely-wrong claim in the text is sufficient to refute the claim. It isn’t necessary to supply a better religion; the original text should lose any credit it was assigned for being the word of God.

  • If rational agent theory is a bad fit for effective human behavior, then the precise predictions of microeconomic theory (e.g. the option of trade never reducing expected utility for either actor, or the efficient market hypothesis being true) are almost certainly false. It isn’t necessary to supply an alternative theory of effective human behavior to reject these predictions.

  • If it is claimed philosophically that agents can only gain knowledge through sense-data, then a single example of an agent gaining knowledge without corresponding sense-data (e.g. mental arithmetic) is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of how agents gain knowledge for this to refute the strongly empiricist theory.

  • If it is claimed that hedonic utility is the only valuable thing, then a single example of a valuable thing other than hedonic utility is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of value.

A theory that has been refuted remains contextually “useful” in a sense, but it’s the walking dead. It isn’t really true everywhere, and:

  • Machines believed to function on the basis of the theory cannot be trusted to be highly reliable.

  • Exceptions to the theory can sometimes be manufactured at will (this is relevant in both security and philosophy).

  • The theory may make significantly worse predictions on average than a skeptical high-entropy prior or low-precision intuitive guesswork, due to being precisely wrong rather than merely imprecise.

  • Generative intellectual processes will eventually discard it, preferring instead an alternative high-precision theory, low-precision intuitions, or skepticism.

  • The theory will go on doing damage by making false high-precision claims.

The fact that false high-precision claims are generally more damaging than false low-precision claims is important ethically. High-precision claims are often used to ethically justify coercion, violence, and so on, where low-precision claims would have been insufficient. For example, imprisoning someone for a long time may be ethically justified if they definitely committed a serious crime, but is much less likely to be justified if the belief that they committed a crime is merely a low-precision guess, not validated by any high-precision checking machine. Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.

Therefore, I believe there is an ethical imperative to apply skepticism to high-precision claims, and to allow them to be falsified by evidence, even without knowing what the real truth is, other than that it isn’t as the high-precision claim says it is.