A model of AI development

FHI has released a new tech report:

Armstrong, Bostrom, and Shulman. Racing to the Precipice: A Model of Artificial Intelligence Development.

Abstract:

This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first, by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other's capabilities (and about their own), the more the danger increases.
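
For intuition about the "more teams, more danger" result, here is a minimal Monte Carlo sketch in Python of the selection effect the abstract points at. This is not the paper's actual model: the uniform skill and risk draws, the linear score `skill + risk_weight * risk`, and treating the winner's risk level as a proxy for disaster probability are all simplifying assumptions of mine.

```python
import random

def expected_winner_risk(num_teams, risk_weight, trials=100_000, seed=0):
    """Estimate the average risk level of the race winner.

    Illustrative stand-in for the paper's model (assumptions are mine):
    each team draws a skill and a risk appetite uniformly in [0, 1];
    its race score is skill + risk_weight * risk, where risk_weight
    captures how much skimping on safety matters relative to skill;
    the top scorer builds the first AI.  The winner's risk level is
    used as a crude proxy for the chance its AI causes a disaster.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        teams = [(rng.random(), rng.random()) for _ in range(num_teams)]
        # The winner is selected partly on risk-taking, so with more
        # teams the winner tends to be a bigger risk-taker.
        _skill, winner_risk = max(teams, key=lambda t: t[0] + risk_weight * t[1])
        total += winner_risk
    return total / trials

if __name__ == "__main__":
    for n in (2, 5, 10):
        for w in (0.5, 2.0):
            p = expected_winner_risk(n, w)
            print(f"teams={n:2d} risk_weight={w:.1f} -> winner's avg risk {p:.3f}")
```

Running this shows the winner's average risk level climbing with the number of teams, and climbing faster when `risk_weight` is large, matching the abstract's claim that extra teams are most dangerous when risk taking matters more than skill.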

The paper is short and readable; discuss it here!

But my main reason for posting is to ask this question: What is the most similar work that you know of? I'd expect people to do this kind of thing for modeling nuclear security risks, and maybe other things, but I don't happen to know of other analyses like this.