Some disjunctive reasons for urgency on AI risk

(This has been sitting in my drafts folder since August 2017. Robin Hanson's recent How Lumpy AI Services? made me think of it again. I'm not sure why I didn't post it back then. I may have wanted to add more reasons, details and/or citations, but at this point it seems better to just post it as is. Apologies to those who may have come up with some of these arguments earlier.)

Robin Hanson recently wrote, "Recently AI risk has become something of an industry, with far more going on than I can keep track of. Many call working on it one of the most effectively altruistic things one can possibly do. But I've searched a bit and as far as I can tell that foom scenario is still the main reason for society to be concerned about AI risk now." (By "foom scenario" he means a local intelligence explosion where a single AI takes over the world.) In response, I list the following additional reasons to work urgently on AI alignment.

  1. Property rights are unlikely to hold up in the face of large capability differentials between humans and AIs, so even if the intelligence explosion is likely global as opposed to local, that doesn't much reduce the urgency of working on AI alignment.

  2. Making sure an AI has aligned values and strong controls against value drift is an extra constraint on the AI design process. This constraint appears likely to be very costly at both design and run time, so if the first human-level AIs deployed aren't value aligned, it seems very difficult for aligned AIs to catch up and become competitive.

  3. AIs' control of the economy will grow over time. This may happen slowly in their time frame but quickly in ours, leaving little time to solve value alignment problems before human values are left with a very small share of the universe, even if property rights hold up.

  4. Once we have human-level AIs and it's really obvious that value alignment is difficult, superintelligent AIs may not be far behind. Superintelligent AIs can probably find ways to bend people's beliefs and values to their benefit (e.g., create highly effective forms of propaganda, cults, philosophical arguments, and the like). Without an equally capable, value-aligned AI to protect me, even if my property rights are technically secure, I don't know how I would secure my mind.