I do think that this sort of “AIs generate their own training environments” flywheel could cause superexponential progress via the same sort of mechanism as AIs automating AI R&D, though I don’t expect to see this data generation effect show up much in overall AI progress.
Confused by this. “AIs generate their own training environments” is (a central part of) AI R&D, right? (Besides this, what else is there? Compute optimisation of various kinds, which shades into architecture, training-algorithm, and hyperparameter innovation/exploration, plus scaffolding, prompting/input design, and tooling/integrations? I’d weight data+environments as ~a third of AI R&D.)
Separately, how do you square “…could cause superexponential…” with “I don’t expect… to show up much”? Is it just that you think it’s conceivable but low probability?