You need expertise sufficient to choose and formulate the puzzles, but not yet sufficient to solve them; this generation-verification gap keeps moving the frontier of understanding forward, step by step, and potentially indefinitely.
Seems plausible. I note that:

- That world is bottlenecked on the compute you can pour into training, particularly if AIs remain much less sample-efficient than humans at learning new tasks.
- Training up the first AI on a skill by doing the generation-verification-gap shuffle is much more expensive than training up later AIs once you can cheaply run inference on an AI that already has the skill; training a later AI to delegate to one specialized in that skill might be cheaper still.
- This world still sees an explosion of recursively improving AI capabilities, but those capability gains are not localized to a single AI agent.