I have thought of that “village idiot and Einstein” claim as the most obvious example of a way that Eliezer and co. were super wrong about how AI would go, and AFAIK they’ve totally failed to publicly reckon with it, even as it’s become increasingly obvious over the last eight years that they were wrong.
I’m confused—what evidence do you mean? As I understood it, the point of the village idiot/Einstein post was that the size of the relative differences in intelligence we were familiar with—e.g., between humans, or between humans and other organisms—tells us little about the absolute size possible in principle. Has some recent evidence updated you about that, or did you interpret the post as making a different point?
(To be clear I also feel confused by Eliezer’s tweet, for the same reason).
Ugh, I think you’re totally right and I was being sloppy; I totally unreasonably interpreted Eliezer as saying that he was wrong about how long/how hard/how expensive it would be to get between capability levels. (But maybe Eliezer misinterpreted himself the same way? His subsequent tweets are consistent with this interpretation.)
I totally agree with Eliezer’s point in that post, though I do wish that he had been clearer about what exactly he was saying.
Makes sense. But on this question too I’m confused—has some evidence in the last 8 years updated you about the old takeoff speed debates? Or are you referring to claims Eliezer made about pre-takeoff rates of progress? From what I recall, the takeoff debates were mostly focused on the rate of progress we’d see given AI much more advanced than anything we have. For example, Paul Christiano operationalized slow takeoff like so:
Given that we have yet to see any such doublings, nor even any discernible impact on world GDP:
… it seems to me that takeoff (in this sense, at least) has not yet started, and hence that we have not yet had much chance to observe evidence that it will be slow?
The common theme here is that the capabilities frontier is more jagged than expected. So the way in which people modeled takeoff in the pre-LLM era was too simplistic.
Takeoff used to be seen as equivalent to the time between AGI and ASI.
In reality we got programs that are not AGI, but that do have capabilities that most people in the past would have assumed entail AGI.
So we have pretty-general intelligence that’s better than most humans in some areas and is already amplifying programming and mathematics productivity. I think takeoff has begun, but under quite different conditions than people used to model.
I don’t think the conditions are that different. Christiano’s argument was largely about the societal impact, i.e. that transformative AI would arrive in an already-pretty-transformed world:
I believe that before we have incredibly powerful AI, we will have AI which is merely very powerful. This won’t be enough to create 100% GDP growth, but it will be enough to lead to (say) 50% GDP growth. I think the likely gap between these events is years rather than months or decades.
In particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out). If true, I think it’s an important fact about the strategic situation.
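To make the growth rates in the quote concrete, here is a toy back-of-envelope sketch (illustrative only; the 50% and 100% figures are the ones Christiano mentions, and ~3% is a rough historical baseline for world GDP growth) of how an annual growth rate translates into a world-output doubling time:

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for world output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# At ~3% growth the world economy doubles roughly every 23 years;
# at 50% it doubles in under 2 years; at 100% it doubles yearly.
for g in (0.03, 0.50, 1.00):
    print(f"{g:.0%} annual growth -> doubles in {doubling_time_years(g):.1f} years")
```

The point of the calculation is just that the regimes Christiano describes are wildly different from historical growth, which is why the absence of any such doubling so far is relevant to whether takeoff (in his sense) has started.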
I claim the world is clearly not yet pretty-transformed, in this sense. So insofar as you think takeoff has already begun, or expect short (e.g. AI 2027-ish) timelines—I personally expect neither, to be clear—I do think this takeoff is centrally of the sort Christiano would call “fast.”
I think you accurately interpreted me as saying I was wrong about how long it would take to get from the “apparently a village idiot” level to “apparently Einstein” level! I hadn’t thought either of us were talking about the vastness of the space above, in re what I was mistaken about. You do not need to walk anything back afaict!
Have you stated anywhere what makes you think “apparently a village idiot” is a sensible description of current learning programs? That is, why think their capabilities arise via generators sufficiently similar to the generators of humanity’s world-affecting capability that we can reasonably infer these systems are somewhat likely to kill everyone soon?