oint, a major step change will also happen when AI is capable of generating new major scientific breakthroughs on its own—more akin to Einstein coming up with relativity to explain known data than
The acceleration of AI R&D will begin sooner than that, I think. We could get a 10x speedup just by automating the typical OpenAI engineer.
I broadly agree. AI tools are already speeding up development today, and on reflection I don’t think AI becoming more capable than humans at modeling the natural world would be a discontinuous point on the ramp-up to superintelligence.
It would be a point where AI gets much harder to predict, though, which is probably why it was on my mind when I was trying to come up with predictions.
And OpenAI has explicitly said this is what they want to do! Their Superalignment strategy looks suspiciously like “gunning for RSI” (recursive self-improvement).