Population Aging as an Impediment to Addressing Global Catastrophic Risks

[ epistemic status: first Less Wrong post, developing hypothesis, seeking feedback and help fleshing out the hypothesis into something that could be researched and about which a discussion paper can be written. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global Catastrophic Risks" (2008) ]

Most of the Global Catastrophic Risks we face in the 21st century, like anthropogenic climate change, comet and asteroid impacts, pandemics, and uncontrolled artificial intelligence, are high impact (affecting the majority or all of humanity), of terminal intensity (producing mass death, economic and social disruption, and in some cases potential human extinction), and of highly uncertain probability [1]. This last property is a major reason it is difficult to bring public attention and political will to bear on mitigating these risks. That matters because all of our work and research on AI safety and other issues will be for naught if there is no understanding of, or will to implement, its conclusions. Implementation may not require public involvement in some cases (AI safety may be manageable by consensus among AI researchers, for example), but others, like the detection of Earth-orbit-crossing asteroids and comets, may require significant expenditure to build detectors and other infrastructure.
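To make concrete why uncertain probability is so corrosive to political will, here is a minimal toy calculation (all numbers are invented for illustration): the expected annual loss from a risk is its annual probability times its damage, so when the probability estimate spans orders of magnitude, the defensible mitigation budget does too.

```python
# Toy illustration (invented numbers): expected annual loss from a
# catastrophic risk is P(event per year) * damage. When P is uncertain
# across orders of magnitude, so is the justified mitigation spend.

damage = 1e14  # assumed damage of one catastrophic event, in dollars

for p in (1e-8, 1e-6, 1e-4):  # a plausible range of annual probabilities
    expected_loss = p * damage
    print(f"P = {p:.0e}/yr -> expected annual loss = ${expected_loss:,.0f}")

# The "right" budget ranges from $1 million to $10 billion per year,
# depending on an estimate we cannot pin down.
```

Under these made-up figures, the same risk can justify a million dollars a year or ten billion, which is exactly the kind of ambiguity that lets an issue drop off the political agenda.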

My interest at present is in additional factors that make mustering political and public will even more difficult: given that these are hard problems to interest people in to begin with, what factors make that even harder? I believe that the aging of populations in the developed world may be a critical factor, progressively redirecting societal resources from long-term projects, like advanced infrastructure or foundational basic science research (which AI safety arguably counts as), towards the provision of health care and pensions.

Several factors make an aging developed-world population a brake on long-term planning:

(1) Older people (age 65+), across the developed world, vote more often than younger people

(2) Voters are more readily mobilized to protect existing entitlements than to support investments in the future

(3) Older voters have more access to, and are more aware of, entitlements than younger people

(4) Expanding on (3), benefits and entitlements are of particularly high salience to the aged because of their failure to save adequately for retirement. This trend has been ongoing and seems unlikely to be due to cognitive biases surrounding future planning.

(5) Long-term investments, research, and other protections/mitigations against Global Catastrophic Risks will require a tradeoff with providing benefits to present people

(6) Older people have more present focus and less future focus than younger people (to the extent that younger people have future focus at all; my anecdotal impression is that most people interested in the far future of humanity are under 50, and a small subset of the under-50 population at that). Strangely, even people with grandchildren and great-grandchildren express limited interest in how their descendants will live and how safe their futures will be.

#5 is the point on which I am most uncertain (though I welcome questions and challenges suggesting that I should be more uncertain still). Unless artificial intelligence and automation in the near term (15-30 years) provide really substantial economic benefits, enough that adequate Global Catastrophic Risk mitigation could be requisitioned without everyone noticing too much (and even then it may be a hard sell), it seems likely that future economic growth will be slower. Older workers, on average (my hunch says …), are harder to retrain, and harder to motivate to retrain for new positions, especially if the alternative is state-funded retirement. In a diminished economic future, one not as rich as it would have been with a more stable population pyramid, politics seems likely to focus on zero-sum games of robbing (young) Peter to pay (old) Paul, whether directly through higher taxation or indirectly by under-investing in the future, as the toy model below illustrates.
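To illustrate the direction of that squeeze, here is a minimal toy budget model (all parameters are invented, and the model ignores growth, debt, and benefit cuts): entitlement spending per retiree is held fixed, the worker-to-retiree ratio falls, and whatever revenue is left over counts as "long-term investment".

```python
# Toy model (invented parameters): how a falling worker-to-retiree ratio
# can crowd long-term investment out of a fixed-rate tax take, assuming
# per-retiree entitlement costs are politically untouchable.

tax_rate = 0.30                 # share of each worker's output collected as tax
entitlement_per_retiree = 0.75  # entitlement cost per retiree, in units of worker output

for workers_per_retiree in (4.0, 3.0, 2.0):  # stylized decline as the population ages
    revenue = workers_per_retiree * tax_rate  # tax revenue per retiree
    residual = max(revenue - entitlement_per_retiree, 0.0)
    share_for_investment = residual / revenue
    print(f"{workers_per_retiree:.0f} workers per retiree: "
          f"{share_for_investment:.0%} of revenue left for long-term investment")
```

With these made-up numbers the discretionary share falls from about 38% to 17% to nothing as the ratio halves; the magnitudes mean little, but the direction is the zero-sum dynamic described above.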

Am I jumping ahead of the problem here? Do we not know enough about what it would take to address the different classes of Global Catastrophic and Existential Risk, or is there a reason to focus now on the factors that could prevent us from ‘doing something about it’?