Here’s Yudkowsky, in the Hanson-Yudkowsky debate:

I think that, at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM.” Just to be clear on the claim, “fast” means on a timescale of weeks or hours rather than years or decades; and “FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology.
So yeah, a few years does seem a ton slower than what he was talking about, at least here.
Here’s Scott Alexander, who describes hard takeoff as a one-month thing:
If AI saunters lazily from infrahuman to human to superhuman, then we’ll probably end up with a lot of more-or-less equally advanced AIs that we can tweak and fine-tune until they cooperate well with us. In this situation, we have to worry about who controls those AIs, and it is here that OpenAI’s model [open sourcing AI] makes the most sense.
But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.
In general, people who have entered the conversation only recently seem to me to miss just how fast a takeoff the original participants were actually talking about.