After critical event W happens, they still won’t believe you

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, “After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z.”

Example 1: “After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done.” (EDIT: This has not happened, and the hypothetical is mouse healthspan extension, not anything cryonic. It’s being cited because this is Aubrey de Grey’s reasoning behind the Methuselah Mouse Prize.)

Alternative projection: Some media brouhaha. Lots of bioethicists acting concerned. Discussion dies off after a week. Nobody thinks about it afterward. The rest of society does not reason the same way Aubrey de Grey does.

Example 2: “As AI gets more sophisticated, everyone will realize that real AI is on the way and then they’ll start taking Friendly AI development seriously.”

Alternative projection: As AI gets more sophisticated, the rest of society can’t see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them. The same people who were talking about robot overlords earlier continue to talk about robot overlords. The same people who were talking about human irreproducibility continue to talk about human specialness. Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone’s previous ideological commitment to a basic income guarantee, inequality reduction, or whatever. The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before. If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it’s on the way, e.g. Hugo de Garis, Ben Goertzel, etc.

Consider the situation in macroeconomics. When the Federal Reserve dropped interest rates to nearly zero and started printing money via quantitative easing, we had some people loudly predicting hyperinflation just because the monetary base had, you know, gone up by a factor of 10 or whatever it was. Which is kind of understandable. But still: a lot of mainstream economists (including the Fed) thought we would not get hyperinflation, the implied spread on inflation-protected Treasuries and numerous other indicators showed that the free market expected below-trend inflation, and then in actual reality we got below-trend inflation. It’s one thing to disagree with economists, and another thing to disagree with implied market forecasts (why aren’t you betting, if you really believe?), and you can still do it sometimes; but when conventional economics, market forecasts, and reality all agree on something, it’s time to shut up and ask the economists how they knew. I had some credence in inflationary worries before that experience, but not afterward…

So what about the rest of the world? In the heavily scientific community you live in, or if you read econblogs, you will find that a number of people have actually started to worry less about inflation and more about sub-trend nominal GDP growth. You will also find that right now these econblogs are having worry-fits about the Fed prematurely exiting QE and choking off the recovery, because the senior people with power have updated more slowly than the econblogs. And in larger society, if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation. Still. The same as before.

Some econblogs are very harsh on Bernanke because the Fed did not print enough money, but when I look at the kind of pressure Bernanke was getting from Congress, he starts to look to me like something of a hero just for following conventional macroeconomics as much as he did.
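(A side note on the mechanics, for readers unfamiliar with the “implied spread”: the market’s inflation forecast is read off as the breakeven rate, roughly the nominal Treasury yield minus the yield on an inflation-protected Treasury of the same maturity. A minimal sketch, with made-up illustrative numbers rather than historical data:)

```python
def breakeven_inflation(nominal_yield, tips_yield):
    """Approximate annual inflation rate implied by the spread between
    a nominal Treasury and an inflation-protected Treasury (TIPS)
    of the same maturity."""
    return nominal_yield - tips_yield

# Illustrative numbers only: a 2.0% nominal 10-year yield against a
# 0.5% 10-year TIPS yield implies the market expects roughly 1.5%
# annual inflation -- below a ~2% trend.
implied = breakeven_inflation(0.020, 0.005)
print(f"{implied:.1%}")  # 1.5%
```

If the market seriously expected hyperinflation, nobody would hold nominal Treasuries at 2%, and that spread would blow out; the fact that it stayed low is what made the market forecast hard to argue with.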

That issue is a hell of a lot more clear-cut than the medical science for human rejuvenation, which in turn is far more clear-cut ethically and policy-wise than issues in AI.

After event W happens, a few more relatively young scientists will see the truth of proposition X, and the larger society won’t be able to tell a damn difference. This won’t change the situation very much; there are probably already some scientists who endorse X, since X is probably pretty predictable even today if you’re unbiased. The scientists who see the truth of X won’t all rush to endorse Y, any more than current scientists who take X seriously all rush to endorse Y. As for people in power lining up behind your preferred policy option Z, forget it; they’re old and set in their ways, and Z is relatively novel without a large existing constituency favoring it. Expect W to be used as argument fodder to support conventional policy options that already have political force behind them, and for Z to not even be on the table.