If you extrapolate capability graphs in the most straightforward way, you get the result that AGI should arrive around 2027-2028. Scenario analyses (like the ones produced by Kokotajlo and Aschenbrenner) tend to converge on the same result.
If you extrapolate log GDP growth or the value of the S&P 500, superintelligence would not be anticipated any time soon. If you extrapolate the number of open mathematical theorems proved by LLMs, you get roughly a constant at zero. You have to decide which straight line you expect to stay straight—what Aschenbrenner did is not objective, and I don’t know about Kokotajlo but I doubt it was meaningfully independent.
We mostly solved egg frying and laundry folding, two of the longest-standing problems in robotics, last year with Aloha and Optimus. So human-level robots in 2024 would actually have been an okay prediction. Actual human level probably requires human-level intelligence, so 2027.
Interesting, link?
This reasoning feels a little motivated though—I think it would be obvious if we had human(-laborer)-level robots, because they’d be walking around doing stuff. I’ve worked in robotics research a little bit, and I can tell you that setting up a demo for an isolated task is VERY different from selling a product that can do it, let alone one product that can seamlessly transition between many tasks.
Very interesting, thanks! On a quick skim, I don’t think I agree with the claim that LLMs have never done anything important. I know for a fact that they have written a lot of production code for a lot of companies, for example. And I personally have read AI texts funny or entertaining enough to look back on, and AI art beautiful enough to admire even a year later. (All of this is highly subjective, of course. I don’t think you’d find the same examples impressive.) If you don’t think any of that qualifies as important, then I think your definition of important may be overly narrow.
But I’ll have to look at this more deeply later.
I think the “pick your straight line” objection would also lead one to reject Moore’s law as a valid way to forecast future compute prices. It is in some sense “obvious” which straight lines one should be looking at: smooth lines of technological progress. I claim that just about any capability with a sufficiently “smooth”, “continuous” definition (your example of the number of open mathematical theorems solved, for instance, would have to be amended to allow for partial progress and partial solutions) will tend to converge around 2027–28. Some converge earlier, some later, but that seems to be around the consensus for when we can expect human-level capability on nearly every task anybody’s bothered to model.
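To make the “which straight line” dispute concrete, here is a minimal sketch of the kind of extrapolation being discussed: fit a straight line to a capability metric on a log scale and read off where it crosses human level. Every number in it is fabricated for illustration; the scores are invented, not taken from any real benchmark or from either forecast.

```python
# Toy illustration of "extrapolate the straight line": fit log(capability)
# against year and read off when the fit crosses human level (score = 1.0).
# All numbers are fabricated; a different metric gives a different answer.
import math

years  = [2020, 2021, 2022, 2023, 2024]
scores = [0.040, 0.062, 0.096, 0.149, 0.231]  # hypothetical benchmark scores

# Ordinary least squares on (year, log score).
n = len(years)
xm = sum(years) / n
logs = [math.log(s) for s in scores]
ym = sum(logs) / n
slope = sum((x - xm) * (y - ym) for x, y in zip(years, logs)) \
        / sum((x - xm) ** 2 for x in years)
intercept = ym - slope * xm

# The fitted line crosses log(score) = 0, i.e. score = 1.0, at this year:
crossing_year = -intercept / slope
print(round(crossing_year, 1))  # → 2027.3 for these made-up numbers
```

Fit the same line to a series that isn’t growing and the crossing year runs off toward infinity—the whole disagreement is over which series deserves the fit.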
The Mobile Aloha website: https://mobile-aloha.github.io/

The front page has a video of the system autonomously cooking a shrimp, among other examples. It is still quite slow and clumsy, but being able to complete tasks like this at all is already light years ahead of where we were just a few years ago.
Oh, I know. It’s normally 5–20 years from lab to home. My 2027 prediction is for a research robot being able to do anything a human can do in an ordinary environment, not necessarily a mass-producible, inexpensive product for consumers or even most businesses. But obviously the advent of superintelligence, under my model, is going to accelerate those usual 5–20 year timelines quite a bit, so it can’t be much after 2027 that you’ll be able to buy your own android. Assuming “buying things” is still a thing, assuming the world remains recognizable for at least some years, and so on.
Okay, at this point perhaps we can just put some (fake) money on the line. Here are some example markets where we can provide each other liquidity; please feel free to suggest others:
I’m not actually relying on a heuristic; I’m compressing https://www.lesswrong.com/posts/vvgND6aLjuDR6QzDF/my-model-of-what-is-going-on-with-llms