Population Aging as an Impediment to Addressing Global Catastrophic Risks

[ epistemic status: first Less Wrong post; a developing hypothesis. Seeking feedback and help fleshing it out into something that could be researched and written up as a discussion paper. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global Catastrophic Risks" (2008) ]

Most of the Global Catastrophic Risks we face in the 21st century, such as anthropogenic climate change, comet and asteroid impacts, pandemics, and uncontrolled artificial intelligence, are high impact (affecting the majority or all of humanity), of terminal intensity (producing mass death, economic and social disruption, and in some cases potential human extinction), and of highly uncertain probability [1]. This last property is a major reason it is difficult to bring public attention and political will to bear on mitigating them. That matters because all of our work and research on AI safety and other issues will be for naught if there is no understanding of it, or will to implement it. Implementation may not require public involvement in every case (AI safety may be manageable by consensus among AI researchers, for example), but other measures, like the detection of Earth-orbit-crossing asteroids and comets, may require significant expenditure to build detectors, etc.

My interest at present is in additional factors that make mustering political and public will even more difficult. Given that these are hard problems to interest people in to begin with, what factors make that even harder? I believe that the aging of populations in the developed world may be a critical one, progressively redirecting societal resources away from long-term projects like advanced infrastructure or foundational basic science research (which arguably includes AI Safety) and towards the provision of health care and pensions.

Several considerations make an aging developed-world population a force blunting long-term planning:

(1) Across the developed world, older people (age 65+) vote more often than younger people

(2) Voters are more readily mobilized to vote in defense of existing entitlements than in support of investments in the future

(3) Older voters have greater access to, and are more aware of, entitlements than younger people

(4) Expanding on (3), benefits and entitlements are of particularly high salience to the aged because of their failure to save adequately for retirement. This trend is long-standing and seems unlikely to be due to cognitive biases surrounding future planning.

(5) Long-term investments, research, and other protections/mitigations against Global Catastrophic Risks will require a tradeoff with providing benefits to present people

(6) Older people have more present focus and less future focus than younger people (to the extent that younger people have future focus at all: my anecdotal observation is that most people interested in the far future of humanity are under 50, and they are a small subset of the under-50 population). Strangely, even people with grandchildren and great-grandchildren express limited interest in how their descendants will live and how safe their futures will be.

#5 is the point on which I am most uncertain (though I welcome questions, and challenges suggesting I should be more uncertain about the rest). Unless artificial intelligence and automation provide really substantial economic benefits in the near term (15-30 years), enough that adequate Global Catastrophic Risk mitigation could be requisitioned without everyone noticing too much (and even then it may be a hard sell), it seems likely that future economic growth will be slower. Older workers, on average (my hunch says …), are harder to retrain, and harder to motivate to retrain for new positions, especially if the alternative is state-funded retirement. In a diminished economic future, one not as rich as it would have been with a more stable population pyramid, politics seems likely to focus on zero-sum games of robbing (young) Peter to pay (old) Paul, whether directly through higher taxation or indirectly by under-investing in the future.
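To make the tradeoff in the preceding paragraph concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (output per worker, entitlement cost per retiree, worker-to-retiree ratios) is a made-up placeholder of my own, not an estimate; the sketch only illustrates the direction of the squeeze, not its size:

```python
# Toy model (all numbers are illustrative placeholders, not data):
# how a shifting population pyramid shrinks the share of output left
# over for long-term projects, holding productivity and per-retiree
# entitlement costs fixed.

def discretionary_share(workers: float, retirees: float,
                        output_per_worker: float = 1.0,
                        cost_per_retiree: float = 0.6) -> float:
    """Fraction of total output remaining after pensions and health care."""
    output = workers * output_per_worker
    entitlements = retirees * cost_per_retiree
    return (output - entitlements) / output

# A "stable" pyramid: 4 workers per retiree.
print(discretionary_share(workers=4, retirees=1))  # 0.85

# An aged pyramid: 2 workers per retiree.
print(discretionary_share(workers=2, retirees=1))  # 0.7
```

Even in this toy version the squeeze appears before any of the behavioral factors in (1)-(4) are layered on top; an aged electorate then makes it politically harder to claw the remaining share back for long-term mitigation.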

Am I jumping ahead of the problem here? Do we not know enough about what it would take to address the different classes of Global Catastrophic and Existential Risk, or is there a reason to focus now on the factors that could prevent us from 'doing something about it'?