Reference class of the unclassreferenceable

One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations that are similar in some essential way.

Figuring out the correct reference class can sometimes be difficult, but even then it is far more reliable than trying to guess while ignoring the evidence of similar cases. Now, in some situations we have precise enough data that the inside view might give the correct answer, but in almost all such cases I'd expect the outside view to be just as usable and not far behind in correctness.

Something that keeps puzzling me is the persistence of certain beliefs on LessWrong. Take the belief in the effectiveness of cryonics: the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in the singularity: the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings likewise has a consistent 0% success rate.
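As a side note, a "consistent 0% success rate" in a finite reference class does not pin the probability at exactly zero. A simple way to see what the outside view actually licenses is Laplace's rule of succession, which turns an observed success count into a posterior estimate under a uniform prior. The sketch below is illustrative only; the reference class size of 1000 is a made-up number, not a claim about the actual historical record.

```python
def laplace_estimate(successes: int, trials: int) -> float:
    """Posterior probability of success on the next trial, given
    `successes` out of `trials` observed, with a uniform prior
    (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

# Hypothetical reference class: 1000 past promises of eternal
# (or very long) life, none of which panned out.
p = laplace_estimate(0, 1000)
# p is about 0.001: small, but not literally zero.
```

This is the sense in which the outside view yields "negligible" rather than impossible: the estimate shrinks toward zero as the reference class grows, but never reaches it.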

And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far above the negligible levels indicated by the outside view; they consider them highly likely or even nearly certain!

There are a few ways this situation can be resolved:

  • Biting the outside view bullet, as I do, and assigning very low probability to these claims.

  • Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable. I invite you to try in the comments, but I doubt this will lead anywhere.

  • Or showing that there is a class of situations for which the outside view is consistently and spectacularly wrong, for which the data are not good enough for precise predictions, and which we nonetheless somehow think we can predict reliably.

How do you reconcile them?