Contrarianism and reference class forecasting

I really liked Robin’s point that mainstream scientists are usually right, while contrarians are usually wrong. We don’t need to get into the details of the dispute (and usually we couldn’t make an informed judgment without spending too much time anyway); just figuring out who’s “mainstream” tells us who’s right with high probability. It’s a type of thinking related to reference class forecasting: find a reference class of similar situations with known outcomes, and you get a pretty decent probability distribution over possible outcomes.
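The basic move can be sketched in a few lines: tally the outcomes in the reference class and read off the empirical base rates. The track record below is made up purely for illustration; nothing in the post commits to these numbers.

```python
from collections import Counter

def reference_class_forecast(outcomes):
    """Empirical probability distribution over the outcomes
    observed in a reference class of similar past situations."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {outcome: n / total for outcome, n in counts.items()}

# Hypothetical track record: who turned out to be right in past
# mainstream-vs-contrarian disputes? (Numbers are invented.)
track_record = ["mainstream"] * 95 + ["contrarian"] * 5
print(reference_class_forecast(track_record))
# {'mainstream': 0.95, 'contrarian': 0.05}
```

The forecast is only as good as the chosen class, which is exactly the problem the rest of the post turns on.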

Unfortunately, deciding on the proper reference class is not straightforward, and it can be a point of contention. If you put climate change scientists in the reference class of “mainstream science”, that gives great credence to their findings. People who doubt them can be freely disbelieved, and their arguments can be dismissed by pointing to the low success rate of contrarianism against mainstream science.

But if you put climate change scientists in the reference class of “highly politicized science”, then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. The chances of the mainstream being right and of the contrarians being right are not too dissimilar in such cases.

Or, if the reference class is “science-y Doomsday predictors”, then they’re almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by mainstream media at the time. So far, despite countless cases of science predicting doom and gloom, not a single such prediction has turned out to be true, and usually not narrowly enough to be discounted by the anthropic principle, but spectacularly so. Cornucopians have been virtually always right.

It’s also possible to use multiple reference classes: to view the impact on climate according to the “highly politicized science” reference class, and the impact on human well-being according to the “science-y Doomsday predictors” reference class, which is more or less how I think about it.
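One way to read “multiple reference classes” is as a weighted blend of base rates, one per class. This is only a sketch of that reading, with invented class names, rates, and weights; the post itself doesn’t specify any numbers.

```python
def blended_base_rate(class_rates, weights):
    """Weighted average of base rates drawn from several
    reference classes. Weights are assumed to sum to 1."""
    return sum(rate * weights[name] for name, rate in class_rates.items())

# Illustrative, made-up base rates for "the mainstream claim is right"
# under two candidate reference classes, weighted equally:
rates = {"mainstream science": 0.95, "highly politicized science": 0.5}
weights = {"mainstream science": 0.5, "highly politicized science": 0.5}
print(blended_base_rate(rates, weights))  # 0.725
```

The weights here are just as subjective as the choice of a single reference class, so blending reframes the problem rather than solving it.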

I’m sure that if you thought hard enough, you could come up with other plausible reference classes, each leading to whatever conclusion you desire. I don’t see how one of these reference class arguments is obviously more valid than the others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except we know in advance that we won’t have the evidence necessary for our views to converge.

The problem only goes away if you agree on reference classes in advance, as you reasonably can with the original application of forecasting the costs of public projects. Does this kill reference class forecasting as a general technique, or is there a way to save it?