How effectively can we plan for future decades? (initial findings)

Cross-posted from MIRI’s blog.

MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?

Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?

To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.docx) so far.

We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.

The most significant results from this project so far are:

  1. Jonah’s initial impressions about The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.

  2. Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.

  3. In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.

  4. Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts aimed at winning the war decades later, and several cases of “ethically concerned scientists.”

  5. Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s recovery from its 1991 financial crisis.

  6. We listed many other historical cases that may be worth investigating.

The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.

Further details are given below. For sources and more, please see our full email exchange (.docx).

The Limits to Growth

In his initial look at The Limits to Growth (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that Limits to Growth predicted a sort of doomsday scenario, à la Ehrlich’s The Population Bomb (1968), that had failed to occur. In particular, it appeared that Limits to Growth had failed to appreciate Julian Simon’s point that other resources would substitute for depleted resources. Upon reading the book, Jonah found that:

  • The book avoids strong, unconditional claims. Its core claim is that if exponential growth of resource usage continues, then there will likely be a societal collapse by 2100. (A toy illustration of the exponential-depletion arithmetic behind this kind of claim follows this list.)

  • The book was careful to qualify its claims, and met high epistemic standards. Jonah wrote: “The book doesn’t look naive even in retrospect, which is impressive given that it was written 40 years ago.”

  • The authors discuss substitutability at length in chapter 4.

  • The book discusses mitigation at a theoretical level, but doesn’t give explicit policy recommendations, perhaps because the issues involved were too complex.
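To make concrete why even a purely conditional claim of this kind has force, here is a minimal sketch, not the book’s World3 model, and with all quantities invented for illustration, of how exponential growth in resource use interacts with a fixed stock:

```python
# Toy model (not World3): exponential resource use against a fixed stock.
# All numbers are illustrative; units are arbitrary.

def years_until_depletion(stock, annual_use, growth_rate):
    """Return the year in which cumulative use first exhausts the stock,
    assuming use grows by `growth_rate` each year."""
    year = 0
    while stock > 0:
        stock -= annual_use
        annual_use *= 1 + growth_rate
        year += 1
    return year

# With 3% annual growth in use, a tenfold larger stock lasts well under
# twice as long (~117 vs. ~194 years in this toy setup).
print(years_until_depletion(stock=1_000, annual_use=1, growth_rate=0.03))
print(years_until_depletion(stock=10_000, annual_use=1, growth_rate=0.03))
```

The only point of the sketch is that under sustained exponential growth, even large differences in the assumed resource base shift the depletion date by a modest amount; that is the arithmetic intuition behind the book’s conditional claim.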

Svante Arrhenius

Derived more than a century ago, Svante Arrhenius’ equation for how the Earth’s temperature varies as a function of the concentration of carbon dioxide is the same equation used today. But while Arrhenius’ climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
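For reference, the relationship in question is logarithmic in CO2 concentration; the sensitivity figures below are standard reference values rather than numbers from Jonah’s write-up:

$$\Delta T \;\approx\; \frac{S}{\ln 2}\,\ln\!\left(\frac{C}{C_0}\right)$$

Here $C_0$ is a baseline CO2 concentration, $C$ the new concentration, and $S$ the equilibrium warming per doubling of CO2. Arrhenius estimated $S$ at roughly 5–6 °C; modern estimates place it closer to 3 °C, but the logarithmic functional form has persisted.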

Arrhenius’ predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.

Norbert Wiener

As Jonah explains, Norbert Wiener (1894-1964) “believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression.” Nearly 50 years after his death, this doesn’t seem to have happened much, though it may eventually happen.

Jonah’s impression is that Wiener had strong views on the subject, doesn’t seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what Berlin (1953) and Tetlock (2005) described as “hedgehog” thinking: “the fox knows many things, but the hedgehog knows one big thing.”

Some historical cases that seem unlikely to shed light on our questions

Rasmussen (1975) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn’t very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain-specific, and because the report makes a large number of small predictions rather than a few salient predictions.

In 1936, Leó Szilárd assigned his chain reaction patent in a way that ensured it would be kept secret from the Nazis. However, Jonah concluded:

I think that this isn’t a good example of a nontrivial future prediction. The destructive potential seems pretty obvious – anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn’t want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.

Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was “too difficult to tie these efforts to war outcomes.”

Jonah also investigated Kaj Sotala’s A brief history of ethically concerned scientists. Most of the historical cases cited there didn’t seem relevant to this project. Many cases involved “scientists concealing their discoveries out of concern that they would be used for military purposes,” but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see Kelly 2011). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.

Some historical cases that might shed light on our questions with much additional research

Jonah performed an initial investigation of the impacts of China’s one-child policy, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy’s impacts.

Jonah also investigated a case involving the Ford Foundation. In a conversation with GiveWell, Lant Pritchett said:

[One] example of transformative philanthropy is related to India’s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India’s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation’s investment in the think tank had a huge impact. For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.

Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.

Other historical cases that might be worth investigating

Historical cases we identified but have not yet investigated include: