Will AI See Sudden Progress?

Will advanced AI let some small group of people or AI systems take over the world?

AI X-risk folks and others have accrued lots of arguments about this over the years, but I think this debate has been disappointing in terms of anyone changing anyone else's mind, or much being resolved. I still have hopes for sorting this out though, and I thought a written summary of the evidence we have so far (which often seems to live in personal conversations) would be a good start, for me at least.

To that end, I started a collection of reasons to expect discontinuous progress near the development of AGI.

I do think the world could be taken over without a step change in anything, but it seems less likely, and we can talk about the arguments around that another time.

Paul Christiano had basically the same idea at the same time, so for a slightly different take, here is his account of reasons to expect slow or fast take-off.

Please tell us in the comments or feedback box if your favorite argument for AI Foom is missing, or isn't represented well. Or, if you want to represent it well yourself, write a short essay and send it to me here, and we will gladly consider posting it as a guest blog post.

I'm also pretty curious to hear which arguments people actually find compelling, even if they are already listed. I don't actually find any of the ones I have that compelling yet, and I think a lot of people who have thought about it do expect 'local takeoff' with at least substantial probability, so I am probably missing things.


Crossposted from AI Impacts.