A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

In July, Ben Garfinkel scrutinized the classic AI risk arguments in a 158-minute interview with 80,000 Hours, which I strongly recommend.

I have formulated a reply, and recorded 80 minutes of video, as part of two presentations in the AISafety.com Reading Group:

196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2

I strongly recommend turning subtitles on. Also consider increasing the playback speed.


“I have made this longer than usual because I have not had time to make it shorter.”
-Blaise Pascal

The podcast/interview format is less well suited to critical text analysis than a formal article or a LessWrong post, for three reasons:

  1. Lack of precision. Placing each qualifier carefully and deliberately while speaking is a difficult skill, and at several points I was uncertain whether I was parsing Ben’s sentences correctly.

  2. Lack of references. The “Classic AI Risk Arguments” are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.

  3. Expansiveness. Many arguments are presented, and a number of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in “Superintelligence” or “The AI Foom Debate”. (This summary is incomplete.)