Value Deathism

Ben Goertzel:

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it’s fairly robust.

Robin Hanson:

Like Ben, I think it is ok (if not ideal) if our descendants’ values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with the undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes) or the desirability of death (deathism proper). But of course the two claims are separate, and shouldn’t influence each other.

Change in the values of future agents, however sudden or gradual, means that the Future (the whole freakin’ Future!) won’t be optimized according to our values, won’t be anywhere near as good as it could’ve been otherwise. It’s easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally “business as usual”, but if we look at the end result, the risks of value drift are the same. And it is difficult to make the future optimized: to stop the uncontrolled “evolution” of value (value drift), or to recover more of the astronomical waste.

Regardless of the difficulty of the challenge, it’s NOT OK to lose the Future. The loss might prove impossible to avert, but still it’s not OK; the value judgment cares not for the feasibility of its desire. Let’s not succumb to the deathist pattern and lose the battle before it’s done. Have the courage and rationality to admit that the loss is real, even if it’s too great for mere human emotions to express.