I don’t understand: what state do you think other technologies are in from the moment Agent-3 appears until everything gets out of control? What about Agent-3 helping with WBE? Or with increasing the intelligence of adults in order to solve alignment?
Why, in worlds where we already have sufficiently powerful agents that haven’t yet gotten out of control, do we see them working on self-improvement but not making game-changing progress in other areas?
If you haven’t already, you should consider reading the Timelines Forecast and Takeoff Forecast research supplements linked on the AI 2027 website. But I think there are a good half dozen (not necessarily independent) reasons for thinking that if AI capabilities start to take off in short-timeline futures, other parts of the overall economy and society aren’t likely to change nearly as quickly or as massively:
- The jagged capabilities frontier in AI, which already exists and will likely widen
- Moravec’s Paradox
- The internal-model/external-model gap
- The lack of compute available for experimentation + training + synthetic data creation + deployment
- The gap in ease of obtaining training data for tasks like Whole Brain Emulation versus for software development and AI research
- The relatively slow diffusion and use of publicly available model capabilities, for reasons of both human psychology and economic efficiency
- Etc.
Basically, the fact that the most pivotal moments of AI 2027 are written as occurring mostly within 2027, rather than, say, across 2029-2034, means that substantial recursive self-improvement (RSI) in AI capabilities can occur before substantial transformations occur in society overall. I think the most likely way AI 2027 is wrong on this matter is that the “intelligence explosion” turns out to be nowhere near as fast, not that it underestimates the speed of the societal impacts occurring simultaneously. The reasons for thinking this are basically taking scaling seriously, plus priors (informed by things like the Industrial Revolution).