to build a superintelligence today, there are roughly two kinds of strategies:
human-directed development
ai-directed development
ai-directed development feels more meaningful than it used to. not only can models now produce tons of useful synthetic data for training future models, but reasoning models can also reason quite well about the next strategic steps in AI capabilities development / research itself.
which means you could very soon (a rough sketch follows this list):
set a reasoning model up in a codebase
have the reasoning model identify ways in which it could become more capable
attempt those strategies (through recursive code modification, sharing research reports with capable humans, etc.)
get feedback on how those strategies went
iterate
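here's a minimal sketch of that loop in python. everything in it is hypothetical scaffolding: propose_strategies, attempt, and evaluate are placeholder names standing in for real model calls, code-modification tooling, and eval harnesses, none of which exist under these names.

```python
# a minimal sketch of the self-improvement loop described above.
# all three helper functions are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class Attempt:
    strategy: str
    outcome: str
    score: float


@dataclass
class LoopState:
    history: list[Attempt] = field(default_factory=list)


def propose_strategies(state: LoopState) -> list[str]:
    # placeholder: in practice, prompt a reasoning model with the
    # codebase plus state.history and ask for capability-improving ideas
    return ["tune hyperparameters", "generate synthetic training data"]


def attempt(strategy: str) -> str:
    # placeholder: apply a code modification, run training, or hand a
    # research report to humans; return a description of what happened
    return f"attempted: {strategy}"


def evaluate(outcome: str) -> float:
    # placeholder: score the outcome on a fixed benchmark suite
    return 0.0


def self_improvement_loop(iterations: int) -> LoopState:
    state = LoopState()
    for _ in range(iterations):
        for strategy in propose_strategies(state):
            outcome = attempt(strategy)
            score = evaluate(outcome)
            # feedback: the next round of proposals sees these results
            state.history.append(Attempt(strategy, outcome, score))
    return state


if __name__ == "__main__":
    final = self_improvement_loop(iterations=3)
    for a in final.history:
        print(a.strategy, "->", a.score)
```

the "iterate" step is state.history feeding back into propose_strategies; in a real system that feedback is the expensive part (training runs, benchmarks, human review), which motivates the question below.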
is this recursive self-improvement process bottlenecked only by the quality of the reasoning model?