[link] New essay summarizing some of my latest thoughts on AI safety

New essay summarizing some of my latest thoughts on AI safety, ~3,500 words. I explain why I think that some of the thought experiments that have previously been used to illustrate the dangers of AI are flawed and should be used very cautiously, why I'm less worried about the dangers of AI than I used to be, and some of the remaining reasons why I continue to be somewhat worried.

http://kajsotala.fi/2015/10/maverick-nannies-and-danger-theses/

Back-cover celebrity endorsement: “Thanks, Kaj, for a very nice write-up. It feels good to be discussing actually meaningful issues regarding AI safety. This is a big contrast to discussions I’ve had in the past with MIRI folks on AI safety, wherein they have generally tried to direct the conversation toward bizarre, pointless irrelevancies like “the values that would be held by a randomly selected mind”, or “AIs with superhuman intelligence making retarded judgments” (like tiling the universe with paperclips to make humans happy), and so forth… Now OTOH, we are actually discussing things of some potential practical meaning ;p …” —Ben Goertzel