Nope, I just misread. Over on ACX I saw that Scott had left a comment:
Our scenario’s changes are partly due to change in intelligence, but also partly to change in agency/time horizon/planning, and partly serial speed. Data efficiency comes later, downstream of the intelligence explosion.
I hadn’t remembered reading that in the post. Still, “things get crazy before models get data-efficient” does sound like the sort of thing which could plausibly fit with the world model in the post (though it would be understated if so). Then I re-skimmed the post, and in the October 2027 section I saw:
The gap between human and AI learning efficiency is rapidly decreasing.
Agent-3, having excellent knowledge of both the human brain and modern AI algorithms, as well as many thousands of copies doing research, ends up making substantial algorithmic strides, narrowing the gap to an agent that’s only around 4,000x less compute-efficient than the human brain.
and when I read that my brain silently did a s/compute-efficient/data-efficient.
Though now I am curious about the authors’ views on how data efficiency will advance over the next 5 years, because that seems very world-model-relevant.
We are indeed imagining that they begin 2027 only about as data-efficient as they are today, but then improve significantly over the course of 2027, reaching superhuman data-efficiency by the end. We originally were going to write “data-efficiency” in that footnote but had trouble deciding on a good definition of it, so we went with compute-efficiency instead.