I haven’t thought very much about takeoff speeds (if that wasn’t obvious!). But I don’t think it’s true that nobody thinks it will take more than a decade… Like, I don’t think Paul Christiano is the #1 slowest of all slow-takeoff advocates. Isn’t Robin Hanson slower? I forget.
Then a different question is “Regardless of what other people think about takeoff speeds, what’s the right answer, or at least what’s plausible?” I don’t know. A key part is: I’m hazy on when you “start the clock”. People were playing with neural networks in the 1990s, but we only got GPT-3 in 2020. What were people doing all that time?? Well, mostly people were ignoring neural networks entirely. But they were also figuring out how to put them on GPUs; making frameworks like TensorFlow and PyTorch, and making those frameworks progressively easier to use, scale, and parallelize; finding all the tricks like BatchNorm, Xavier initialization, and Transformers; making better teaching materials and MOOCs to spread awareness of how these things work; developing new and better chips tailored to these algorithms (and vice versa); waiting on Moore’s law; and on and on. I find it conceivable that we could get “glimmers of AGI” (in some relevant sense) in algorithms that have not yet jumped through all those hoops, so we’re stuck with kinda toy examples for quite a while as we develop the infrastructure to scale these algorithms, the bag of tricks to make them run better, the MOOCs, the ASICs, and so on. But I dunno.
Or maybe I am misunderstanding what you mean by accidents?
Yeah, sorry, when I said “accidents” I meant “the humans did something by accident”, not “the AI did something by accident”.
Thanks! Yeah, there are plenty of people who think takeoff will take more than a decade—but I guess I’ll just say, I’m pretty sure they are all wrong. :) But we should take care to define what the start point of takeoff is. Traditionally it was something like “When the AI itself is doing most of the AI research,” but I’m very willing to consider alternate definitions. I certainly agree it might take more than 10 years if we define things in such a way that takeoff has already begun.
Wait, uh-oh, I didn’t mean “the AI did something by accident” either… can you elaborate? By “accident” I thought you meant something like “Small-scale disasters, betrayals, etc. caused by AI that are shocking enough to count as warning shots / fire alarms to at least some extent.”
Oh sorry, I misread what you wrote. Sure, maybe, I dunno. I just edited the article to say “some number of years”.
I never meant to make a claim “20 years is definitely in the realm of possibility” but rather to make a claim “even if it takes 20 years, that’s still not necessarily enough to declare that we’re all good”.
Ah, OK. We are on the same page then.