The AISafety.com Reading Group discussed this article when it was published. My slides are here: https://www.dropbox.com/s/t0k6wn4q90emwf2/Takeoff_Speeds.pptx?dl=0
There is a recording of my presentation here: https://youtu.be/7ogJuXNmAIw
My notes from the discussion are reproduced below:
We liked the article quite a lot. There were a surprising number of new insights for an article that purports merely to collect standard arguments.
The definition of fast takeoff seemed somewhat non-standard, conflating three things: speed as measured in clock time, continuity/smoothness around the threshold where AGI reaches the human baseline, and locality. These three questions are closely related but not identical, and some precision would be appreciated. In fairness, the article was posted on Paul Christiano's “popular” blog, not his “formal” blog.
The degree to which we can build universal / general AIs right now was a point of contention. Our (limited) understanding is that most AI researchers would disagree with Paul Christiano about whether we can build a universal or general AI right now. Paul Christiano's argument seems to rest on our ability to trade off universality against other factors, but if (as we believe) universality is still mysterious, this tradeoff is not possible.
There was some confusion about the relationship between “Universality” and “Generality”. Possibly, a “village idiot” is above the level of generality (passes the Turing test, can make coffee) but below the level of universality (unable to self-improve to superintelligence, even given infinite time). It is unclear whether Paul Christiano would agree with this.
The comparison between humans and chimpanzees was discussed and related to the argument from human variation, which seems to be stronger. The difference between a village idiot and Einstein is also large, and the counter-argument about what evolution cares about seems not to hold here.
Paul Christiano asked for a canonical example of a key insight enabling a previously intractable problem to be solved. A candidate would be my matrix multiplication example (https://youtu.be/5DDdBHsDI-Y). Here, a series of four key insights turns the problem from requiring a decade, to a year, to a day, to a second. While the example is neither canonical nor precisely what Paul Christiano asks for, it does point to a way to build intuition about the “key insight”: grab paper and a pen, and try to do matrix multiplication faster than O(n^3). It is possible, but far from trivial.
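To make the “faster than O(n^3)” point concrete, here is a minimal sketch of Strassen's algorithm, which was historically the first such insight: by recombining the block quadrants of the inputs, a 2x2 block product needs only 7 recursive multiplications instead of 8, giving O(n^2.807). This is my own illustrative sketch (for power-of-two sizes only), not code from the article or the talk:

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) using Strassen's
    7-multiplication recursion, O(n^2.807) instead of O(n^3)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both matrices into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # The key insight: 7 cleverly chosen products suffice.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the result quadrants from the 7 products.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

For example, strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]]. Try deriving the seven products yourself with pen and paper; it is exactly the kind of non-obvious recombination the “key insight” discussion is about.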
For the deployment lag (“Sonic Boom”) argument, a factor that can complicate the tradeoff is secrecy. If deployment causes you to lose the advantages of secrecy, the tradeoffs described could look much worse.
A number of the arguments for a fast takeoff did seem to aggregate in one specific way: if our prior is for a “quite fast” takeoff, the arguments push us towards expecting a “very fast” takeoff. This is my personal interpretation, and I have not really formalized it; I should get around to that some day.
Good luck with the in-person AI Safety reading group. It sounds productive and fun.
For the past two years, I have been running the Skype-based AISafety.com Reading Group. You can see the material we have covered at https://aisafety.com/reading-group/ . Yesterday, Vadim Kosoy from MIRI gave a great presentation of his Learning-Theoretic Agenda: https://youtu.be/6MkmeADXcZg
Usually, I try to post a summary of the discussion to our Facebook group, but I’ve been unable to get a follow-on discussion going. Your summary/idea above is higher quality than what I post.
Please tell me if you have any ideas for collaboration between our reading groups, or if I can do anything else to help you :).